00:00:00.000 Started by upstream project "autotest-per-patch" build number 132722 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.049 The recommended git tool is: git 00:00:00.050 using credential 00000000-0000-0000-0000-000000000002 00:00:00.055 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.084 Fetching changes from the remote Git repository 00:00:00.086 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.121 Using shallow fetch with depth 1 00:00:00.121 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.121 > git --version # timeout=10 00:00:00.163 > git --version # 'git version 2.39.2' 00:00:00.163 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.207 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.207 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.843 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.856 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.869 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.870 > git config core.sparsecheckout # timeout=10 00:00:03.899 > git read-tree -mu HEAD # timeout=10 00:00:03.917 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.940 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.941 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.028 [Pipeline] Start of Pipeline 00:00:04.042 [Pipeline] library 00:00:04.052 Loading library shm_lib@master 00:00:04.052 Library shm_lib@master is cached. Copying from home. 00:00:04.073 [Pipeline] node 00:00:04.085 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:04.087 [Pipeline] { 00:00:04.097 [Pipeline] catchError 00:00:04.098 [Pipeline] { 00:00:04.108 [Pipeline] wrap 00:00:04.116 [Pipeline] { 00:00:04.121 [Pipeline] stage 00:00:04.123 [Pipeline] { (Prologue) 00:00:04.142 [Pipeline] echo 00:00:04.143 Node: VM-host-SM0 00:00:04.150 [Pipeline] cleanWs 00:00:04.160 [WS-CLEANUP] Deleting project workspace... 00:00:04.160 [WS-CLEANUP] Deferred wipeout is used... 
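Annotation: the checkout above never creates a local branch. The Jenkins git plugin does a depth-1 fetch of refs/heads/master and then checks out the resulting FETCH_HEAD commit detached, which is why only rev-parse/read-tree/checkout follow the fetch. A minimal sketch of the same sequence outside Jenkins, using the repository URL from this log (the local directory name is arbitrary):

    # shallow single-branch fetch, mirroring the log's 'git fetch --depth=1' step
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # detached checkout of the fetched commit (db4637e8 in this run)
    git checkout -f "$(git rev-parse 'FETCH_HEAD^{commit}')"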
00:00:04.165 [WS-CLEANUP] done 00:00:04.492 [Pipeline] setCustomBuildProperty 00:00:04.563 [Pipeline] httpRequest 00:00:04.933 [Pipeline] echo 00:00:04.934 Sorcerer 10.211.164.101 is alive 00:00:04.942 [Pipeline] retry 00:00:04.944 [Pipeline] { 00:00:04.954 [Pipeline] httpRequest 00:00:04.957 HttpMethod: GET 00:00:04.958 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.958 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.982 Response Code: HTTP/1.1 200 OK 00:00:04.982 Success: Status code 200 is in the accepted range: 200,404 00:00:04.983 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:33.036 [Pipeline] } 00:00:33.053 [Pipeline] // retry 00:00:33.060 [Pipeline] sh 00:00:33.367 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:33.444 [Pipeline] httpRequest 00:00:33.861 [Pipeline] echo 00:00:33.863 Sorcerer 10.211.164.101 is alive 00:00:33.871 [Pipeline] retry 00:00:33.873 [Pipeline] { 00:00:33.885 [Pipeline] httpRequest 00:00:33.891 HttpMethod: GET 00:00:33.891 URL: http://10.211.164.101/packages/spdk_6696ebaaea03ecbe501fdf56eaaaa813afbd3409.tar.gz 00:00:33.892 Sending request to url: http://10.211.164.101/packages/spdk_6696ebaaea03ecbe501fdf56eaaaa813afbd3409.tar.gz 00:00:33.903 Response Code: HTTP/1.1 200 OK 00:00:33.904 Success: Status code 200 is in the accepted range: 200,404 00:00:33.905 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_6696ebaaea03ecbe501fdf56eaaaa813afbd3409.tar.gz 00:04:13.585 [Pipeline] } 00:04:13.600 [Pipeline] // retry 00:04:13.607 [Pipeline] sh 00:04:13.890 + tar --no-same-owner -xf spdk_6696ebaaea03ecbe501fdf56eaaaa813afbd3409.tar.gz 00:04:17.267 [Pipeline] sh 00:04:17.544 + git -C spdk log --oneline -n5 00:04:17.545 6696ebaae util: keep track of nested child fd_groups 00:04:17.545 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:04:17.545 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:04:17.545 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:04:17.545 e2dfdf06c accel/mlx5: Register post_poller handler 00:04:17.562 [Pipeline] writeFile 00:04:17.575 [Pipeline] sh 00:04:17.850 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:17.862 [Pipeline] sh 00:04:18.142 + cat autorun-spdk.conf 00:04:18.142 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:18.142 SPDK_TEST_NVMF=1 00:04:18.142 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:18.142 SPDK_TEST_USDT=1 00:04:18.142 SPDK_TEST_NVMF_MDNS=1 00:04:18.142 SPDK_RUN_UBSAN=1 00:04:18.142 NET_TYPE=virt 00:04:18.142 SPDK_JSONRPC_GO_CLIENT=1 00:04:18.142 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:18.148 RUN_NIGHTLY=0 00:04:18.150 [Pipeline] } 00:04:18.164 [Pipeline] // stage 00:04:18.180 [Pipeline] stage 00:04:18.183 [Pipeline] { (Run VM) 00:04:18.197 [Pipeline] sh 00:04:18.476 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:18.476 + echo 'Start stage prepare_nvme.sh' 00:04:18.476 Start stage prepare_nvme.sh 00:04:18.476 + [[ -n 1 ]] 00:04:18.476 + disk_prefix=ex1 00:04:18.476 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:04:18.476 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:04:18.476 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:04:18.476 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:18.476 ++ SPDK_TEST_NVMF=1 00:04:18.476 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:18.477 ++ SPDK_TEST_USDT=1 00:04:18.477 ++ SPDK_TEST_NVMF_MDNS=1 00:04:18.477 ++ SPDK_RUN_UBSAN=1 00:04:18.477 ++ NET_TYPE=virt 00:04:18.477 ++ SPDK_JSONRPC_GO_CLIENT=1 00:04:18.477 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:18.477 ++ RUN_NIGHTLY=0 00:04:18.477 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:04:18.477 + nvme_files=() 00:04:18.477 + declare -A nvme_files 00:04:18.477 + backend_dir=/var/lib/libvirt/images/backends 00:04:18.477 + nvme_files['nvme.img']=5G 00:04:18.477 + nvme_files['nvme-cmb.img']=5G 00:04:18.477 + nvme_files['nvme-multi0.img']=4G 00:04:18.477 + nvme_files['nvme-multi1.img']=4G 00:04:18.477 + nvme_files['nvme-multi2.img']=4G 00:04:18.477 + nvme_files['nvme-openstack.img']=8G 00:04:18.477 + nvme_files['nvme-zns.img']=5G 00:04:18.477 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:18.477 + (( SPDK_TEST_FTL == 1 )) 00:04:18.477 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:18.477 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:04:18.477 + for nvme in "${!nvme_files[@]}" 00:04:18.477 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:04:18.477 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:18.477 + for nvme in "${!nvme_files[@]}" 00:04:18.477 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:04:18.477 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:18.477 + for nvme in "${!nvme_files[@]}" 00:04:18.477 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:04:18.477 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:18.477 + for nvme in "${!nvme_files[@]}" 00:04:18.477 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:04:18.477 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:18.477 + for nvme in "${!nvme_files[@]}" 00:04:18.477 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:04:18.477 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:18.477 + for nvme in "${!nvme_files[@]}" 00:04:18.477 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:04:18.736 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:18.736 + for nvme in "${!nvme_files[@]}" 00:04:18.736 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:04:18.736 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:18.736 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:04:18.736 + echo 'End stage prepare_nvme.sh' 00:04:18.736 End stage prepare_nvme.sh 00:04:18.748 [Pipeline] sh 00:04:19.030 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:19.030 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:04:19.030 00:04:19.030 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:04:19.030 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:04:19.030 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:04:19.030 HELP=0 00:04:19.030 DRY_RUN=0 00:04:19.030 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:04:19.030 NVME_DISKS_TYPE=nvme,nvme, 00:04:19.030 NVME_AUTO_CREATE=0 00:04:19.030 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:04:19.030 NVME_CMB=,, 00:04:19.030 NVME_PMR=,, 00:04:19.030 NVME_ZNS=,, 00:04:19.030 NVME_MS=,, 00:04:19.030 NVME_FDP=,, 00:04:19.030 SPDK_VAGRANT_DISTRO=fedora39 00:04:19.030 SPDK_VAGRANT_VMCPU=10 00:04:19.030 SPDK_VAGRANT_VMRAM=12288 00:04:19.030 SPDK_VAGRANT_PROVIDER=libvirt 00:04:19.030 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:19.030 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:19.030 SPDK_OPENSTACK_NETWORK=0 00:04:19.030 VAGRANT_PACKAGE_BOX=0 00:04:19.030 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:04:19.030 FORCE_DISTRO=true 00:04:19.030 VAGRANT_BOX_VERSION= 00:04:19.030 EXTRA_VAGRANTFILES= 00:04:19.030 NIC_MODEL=e1000 00:04:19.030 00:04:19.030 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt' 00:04:19.030 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:04:22.315 Bringing machine 'default' up with 'libvirt' provider... 00:04:23.251 ==> default: Creating image (snapshot of base box volume). 00:04:23.509 ==> default: Creating domain with the following settings... 
00:04:23.509 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733489891_0ce9ef60fbc46442394a 00:04:23.509 ==> default: -- Domain type: kvm 00:04:23.509 ==> default: -- Cpus: 10 00:04:23.509 ==> default: -- Feature: acpi 00:04:23.509 ==> default: -- Feature: apic 00:04:23.509 ==> default: -- Feature: pae 00:04:23.509 ==> default: -- Memory: 12288M 00:04:23.509 ==> default: -- Memory Backing: hugepages: 00:04:23.509 ==> default: -- Management MAC: 00:04:23.509 ==> default: -- Loader: 00:04:23.509 ==> default: -- Nvram: 00:04:23.509 ==> default: -- Base box: spdk/fedora39 00:04:23.509 ==> default: -- Storage pool: default 00:04:23.509 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733489891_0ce9ef60fbc46442394a.img (20G) 00:04:23.509 ==> default: -- Volume Cache: default 00:04:23.509 ==> default: -- Kernel: 00:04:23.509 ==> default: -- Initrd: 00:04:23.509 ==> default: -- Graphics Type: vnc 00:04:23.509 ==> default: -- Graphics Port: -1 00:04:23.509 ==> default: -- Graphics IP: 127.0.0.1 00:04:23.509 ==> default: -- Graphics Password: Not defined 00:04:23.509 ==> default: -- Video Type: cirrus 00:04:23.509 ==> default: -- Video VRAM: 9216 00:04:23.509 ==> default: -- Sound Type: 00:04:23.509 ==> default: -- Keymap: en-us 00:04:23.509 ==> default: -- TPM Path: 00:04:23.509 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:23.510 ==> default: -- Command line args: 00:04:23.510 ==> default: -> value=-device, 00:04:23.510 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:04:23.510 ==> default: -> value=-drive, 00:04:23.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:04:23.510 ==> default: -> value=-device, 00:04:23.510 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:23.510 ==> default: -> value=-device, 00:04:23.510 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:04:23.510 ==> default: -> value=-drive, 00:04:23.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:04:23.510 ==> default: -> value=-device, 00:04:23.510 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:23.510 ==> default: -> value=-drive, 00:04:23.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:04:23.510 ==> default: -> value=-device, 00:04:23.510 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:23.510 ==> default: -> value=-drive, 00:04:23.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:04:23.510 ==> default: -> value=-device, 00:04:23.510 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:23.510 ==> default: Creating shared folders metadata... 00:04:23.510 ==> default: Starting domain. 00:04:25.414 ==> default: Waiting for domain to get an IP address... 00:04:43.554 ==> default: Waiting for SSH to become available... 00:04:43.554 ==> default: Configuring and enabling network interfaces... 
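Annotation: the -drive/-device pairs in the domain definition implement the NVME_DISKS_NAMESPACES mapping printed in the Setup block: ex1-nvme.img becomes a single-namespace controller (serial 12340), and the three ex1-nvme-multi*.img files become namespaces 1-3 on a second controller (serial 12341). A trimmed sketch of the second controller's layout, assuming QEMU's nvme/nvme-ns devices as used above (unrelated arguments elided):

    qemu-system-x86_64 ... \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2

Inside the guest this surfaces as one controller with several namespaces, which is what 'setup.sh status' reports further down as nvme1 with nvme1n1 nvme1n2 nvme1n3.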
00:04:47.745 default: SSH address: 192.168.121.104:22 00:04:47.745 default: SSH username: vagrant 00:04:47.745 default: SSH auth method: private key 00:04:49.646 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:57.808 ==> default: Mounting SSHFS shared folder... 00:04:59.186 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:04:59.186 ==> default: Checking Mount.. 00:05:00.564 ==> default: Folder Successfully Mounted! 00:05:00.564 ==> default: Running provisioner: file... 00:05:01.501 default: ~/.gitconfig => .gitconfig 00:05:01.760 00:05:01.760 SUCCESS! 00:05:01.760 00:05:01.760 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:05:01.760 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:01.760 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:05:01.760 00:05:01.768 [Pipeline] } 00:05:01.782 [Pipeline] // stage 00:05:01.791 [Pipeline] dir 00:05:01.792 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt 00:05:01.793 [Pipeline] { 00:05:01.805 [Pipeline] catchError 00:05:01.807 [Pipeline] { 00:05:01.819 [Pipeline] sh 00:05:02.098 + vagrant ssh-config --host vagrant 00:05:02.098 + sed -ne /^Host/,$p 00:05:02.098 + tee ssh_conf 00:05:05.412 Host vagrant 00:05:05.412 HostName 192.168.121.104 00:05:05.412 User vagrant 00:05:05.412 Port 22 00:05:05.412 UserKnownHostsFile /dev/null 00:05:05.412 StrictHostKeyChecking no 00:05:05.412 PasswordAuthentication no 00:05:05.412 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:05:05.412 IdentitiesOnly yes 00:05:05.412 LogLevel FATAL 00:05:05.412 ForwardAgent yes 00:05:05.412 ForwardX11 yes 00:05:05.412 00:05:05.425 [Pipeline] withEnv 00:05:05.427 [Pipeline] { 00:05:05.442 [Pipeline] sh 00:05:05.721 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:05:05.721 source /etc/os-release 00:05:05.721 [[ -e /image.version ]] && img=$(< /image.version) 00:05:05.721 # Minimal, systemd-like check. 00:05:05.721 if [[ -e /.dockerenv ]]; then 00:05:05.721 # Clear garbage from the node's name: 00:05:05.721 # agt-er_autotest_547-896 -> autotest_547-896 00:05:05.721 # $HOSTNAME is the actual container id 00:05:05.721 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:05.721 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:05:05.721 # We can assume this is a mount from a host where container is running, 00:05:05.722 # so fetch its hostname to easily identify the target swarm worker. 
00:05:05.722 container="$(< /etc/hostname) ($agent)" 00:05:05.722 else 00:05:05.722 # Fallback 00:05:05.722 container=$agent 00:05:05.722 fi 00:05:05.722 fi 00:05:05.722 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:05.722 00:05:05.991 [Pipeline] } 00:05:06.008 [Pipeline] // withEnv 00:05:06.019 [Pipeline] setCustomBuildProperty 00:05:06.035 [Pipeline] stage 00:05:06.038 [Pipeline] { (Tests) 00:05:06.056 [Pipeline] sh 00:05:06.337 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:06.608 [Pipeline] sh 00:05:06.885 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:05:07.157 [Pipeline] timeout 00:05:07.157 Timeout set to expire in 1 hr 0 min 00:05:07.159 [Pipeline] { 00:05:07.174 [Pipeline] sh 00:05:07.456 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:05:08.021 HEAD is now at 6696ebaae util: keep track of nested child fd_groups 00:05:08.032 [Pipeline] sh 00:05:08.310 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:05:08.651 [Pipeline] sh 00:05:08.927 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:09.199 [Pipeline] sh 00:05:09.478 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:05:09.737 ++ readlink -f spdk_repo 00:05:09.737 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:09.737 + [[ -n /home/vagrant/spdk_repo ]] 00:05:09.737 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:09.737 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:09.737 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:09.737 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:09.737 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:09.737 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:05:09.737 + cd /home/vagrant/spdk_repo 00:05:09.737 + source /etc/os-release 00:05:09.737 ++ NAME='Fedora Linux' 00:05:09.737 ++ VERSION='39 (Cloud Edition)' 00:05:09.737 ++ ID=fedora 00:05:09.737 ++ VERSION_ID=39 00:05:09.737 ++ VERSION_CODENAME= 00:05:09.737 ++ PLATFORM_ID=platform:f39 00:05:09.737 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:09.737 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:09.737 ++ LOGO=fedora-logo-icon 00:05:09.737 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:09.737 ++ HOME_URL=https://fedoraproject.org/ 00:05:09.737 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:09.737 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:09.737 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:09.737 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:09.737 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:09.737 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:09.737 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:09.737 ++ SUPPORT_END=2024-11-12 00:05:09.737 ++ VARIANT='Cloud Edition' 00:05:09.737 ++ VARIANT_ID=cloud 00:05:09.737 + uname -a 00:05:09.737 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:09.737 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:10.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.303 Hugepages 00:05:10.303 node hugesize free / total 00:05:10.303 node0 1048576kB 0 / 0 00:05:10.303 node0 2048kB 0 / 0 00:05:10.303 00:05:10.303 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:10.303 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:10.303 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:10.303 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:10.303 + rm -f /tmp/spdk-ld-path 00:05:10.303 + source autorun-spdk.conf 00:05:10.303 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:10.303 ++ SPDK_TEST_NVMF=1 00:05:10.303 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:10.303 ++ SPDK_TEST_USDT=1 00:05:10.303 ++ SPDK_TEST_NVMF_MDNS=1 00:05:10.303 ++ SPDK_RUN_UBSAN=1 00:05:10.303 ++ NET_TYPE=virt 00:05:10.303 ++ SPDK_JSONRPC_GO_CLIENT=1 00:05:10.303 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:10.303 ++ RUN_NIGHTLY=0 00:05:10.303 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:10.303 + [[ -n '' ]] 00:05:10.303 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:10.303 + for M in /var/spdk/build-*-manifest.txt 00:05:10.303 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:10.303 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:10.303 + for M in /var/spdk/build-*-manifest.txt 00:05:10.303 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:10.303 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:10.303 + for M in /var/spdk/build-*-manifest.txt 00:05:10.303 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:10.303 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:10.303 ++ uname 00:05:10.303 + [[ Linux == \L\i\n\u\x ]] 00:05:10.303 + sudo dmesg -T 00:05:10.303 + sudo dmesg --clear 00:05:10.564 + dmesg_pid=5255 00:05:10.564 + [[ Fedora Linux == FreeBSD ]] 00:05:10.564 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:10.564 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:10.564 + sudo dmesg -Tw 00:05:10.564 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:10.564 + [[ -x /usr/src/fio-static/fio ]] 00:05:10.564 + export FIO_BIN=/usr/src/fio-static/fio 00:05:10.564 + FIO_BIN=/usr/src/fio-static/fio 00:05:10.564 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:10.564 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:10.564 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:10.564 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:10.564 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:10.564 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:10.564 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:10.564 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:10.564 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:10.564 12:58:59 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:05:10.564 12:58:59 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:10.564 12:58:59 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:10.564 12:58:59 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:05:10.564 12:58:59 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:10.564 12:58:59 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 00:05:10.564 12:58:59 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 00:05:10.564 12:58:59 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:05:10.564 12:58:59 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:05:10.564 12:58:59 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:05:10.564 12:58:59 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:10.564 12:58:59 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:05:10.564 12:58:59 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:05:10.564 12:58:59 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:10.564 12:58:59 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:05:10.564 12:58:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.564 12:58:59 -- scripts/common.sh@15 -- $ shopt -s extglob 00:05:10.564 12:58:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:10.564 12:58:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.564 12:58:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.564 12:58:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.564 12:58:59 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.564 12:58:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.564 12:58:59 -- paths/export.sh@5 -- $ export PATH 00:05:10.564 12:58:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.564 12:58:59 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:10.564 12:58:59 -- common/autobuild_common.sh@493 -- $ date +%s 00:05:10.564 12:58:59 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733489939.XXXXXX 00:05:10.564 12:58:59 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733489939.K5EC5W 00:05:10.564 12:58:59 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:05:10.564 12:58:59 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:05:10.564 12:58:59 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:05:10.564 12:58:59 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:10.564 12:58:59 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:10.564 12:58:59 -- common/autobuild_common.sh@509 -- $ get_config_params 00:05:10.564 12:58:59 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:05:10.564 12:58:59 -- common/autotest_common.sh@10 -- $ set +x 00:05:10.564 12:58:59 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:05:10.564 12:58:59 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:05:10.564 12:58:59 -- pm/common@17 -- $ local monitor 00:05:10.564 12:58:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.564 12:58:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.564 12:58:59 -- pm/common@25 -- $ sleep 1 00:05:10.564 12:58:59 -- pm/common@21 -- $ date +%s 00:05:10.564 12:58:59 -- pm/common@21 -- $ date +%s 00:05:10.564 12:58:59 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733489939 00:05:10.564 12:58:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733489939 00:05:10.564 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733489939_collect-cpu-load.pm.log 00:05:10.564 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733489939_collect-vmstat.pm.log 00:05:11.499 12:59:00 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:05:11.499 12:59:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:11.500 12:59:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:11.500 12:59:00 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:11.500 12:59:00 -- spdk/autobuild.sh@16 -- $ date -u 00:05:11.500 Fri Dec 6 12:59:00 PM UTC 2024 00:05:11.500 12:59:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:11.758 v25.01-pre-304-g6696ebaae 00:05:11.758 12:59:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:11.758 12:59:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:11.758 12:59:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:11.758 12:59:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:11.758 12:59:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:11.758 12:59:00 -- common/autotest_common.sh@10 -- $ set +x 00:05:11.758 ************************************ 00:05:11.758 START TEST ubsan 00:05:11.758 ************************************ 00:05:11.758 using ubsan 00:05:11.758 12:59:00 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:05:11.758 00:05:11.758 real 0m0.000s 00:05:11.758 user 0m0.000s 00:05:11.758 sys 0m0.000s 00:05:11.758 12:59:00 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:11.758 12:59:00 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:11.758 ************************************ 00:05:11.758 END TEST ubsan 00:05:11.758 ************************************ 00:05:11.758 12:59:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:11.758 12:59:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:11.758 12:59:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:11.758 12:59:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:11.758 12:59:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:11.758 12:59:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:11.758 12:59:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:11.758 12:59:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:11.758 12:59:00 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:05:11.758 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:11.758 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:12.326 Using 'verbs' RDMA provider 00:05:28.202 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:05:40.418 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:05:40.418 go version go1.21.1 linux/amd64 00:05:40.418 Creating mk/config.mk...done. 00:05:40.418 Creating mk/cc.flags.mk...done. 
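Annotation: the 'START TEST' / 'END TEST' banners and the real/user/sys timing around the ubsan block above come from SPDK's run_test helper, which frames each test's output so it can be grepped out of the log. A rough illustration of the pattern (a hypothetical re-implementation, not SPDK's actual function):

    run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      time "$@"                    # e.g. run_test ubsan echo 'using ubsan'
      echo "END TEST $name"
      echo '************************************'
    }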
00:05:40.418 Type 'make' to build. 00:05:40.418 12:59:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:05:40.418 12:59:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:40.418 12:59:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:40.419 12:59:28 -- common/autotest_common.sh@10 -- $ set +x 00:05:40.419 ************************************ 00:05:40.419 START TEST make 00:05:40.419 ************************************ 00:05:40.419 12:59:28 make -- common/autotest_common.sh@1129 -- $ make -j10 00:05:40.676 make[1]: Nothing to be done for 'all'. 00:05:55.548 The Meson build system 00:05:55.548 Version: 1.5.0 00:05:55.548 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:55.548 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:55.548 Build type: native build 00:05:55.548 Program cat found: YES (/usr/bin/cat) 00:05:55.548 Project name: DPDK 00:05:55.548 Project version: 24.03.0 00:05:55.548 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:55.548 C linker for the host machine: cc ld.bfd 2.40-14 00:05:55.548 Host machine cpu family: x86_64 00:05:55.548 Host machine cpu: x86_64 00:05:55.548 Message: ## Building in Developer Mode ## 00:05:55.548 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:55.548 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:55.548 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:55.548 Program python3 found: YES (/usr/bin/python3) 00:05:55.548 Program cat found: YES (/usr/bin/cat) 00:05:55.548 Compiler for C supports arguments -march=native: YES 00:05:55.548 Checking for size of "void *" : 8 00:05:55.548 Checking for size of "void *" : 8 (cached) 00:05:55.548 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:55.548 Library m found: YES 00:05:55.548 Library numa found: YES 00:05:55.548 Has header "numaif.h" : YES 00:05:55.548 Library fdt found: NO 00:05:55.548 Library execinfo found: NO 00:05:55.548 Has header "execinfo.h" : YES 00:05:55.548 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:55.548 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:55.548 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:55.548 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:55.548 Run-time dependency openssl found: YES 3.1.1 00:05:55.548 Run-time dependency libpcap found: YES 1.10.4 00:05:55.548 Has header "pcap.h" with dependency libpcap: YES 00:05:55.548 Compiler for C supports arguments -Wcast-qual: YES 00:05:55.548 Compiler for C supports arguments -Wdeprecated: YES 00:05:55.548 Compiler for C supports arguments -Wformat: YES 00:05:55.548 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:55.548 Compiler for C supports arguments -Wformat-security: NO 00:05:55.548 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:55.548 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:55.548 Compiler for C supports arguments -Wnested-externs: YES 00:05:55.548 Compiler for C supports arguments -Wold-style-definition: YES 00:05:55.548 Compiler for C supports arguments -Wpointer-arith: YES 00:05:55.548 Compiler for C supports arguments -Wsign-compare: YES 00:05:55.548 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:55.548 Compiler for C supports arguments -Wundef: YES 00:05:55.548 Compiler for C supports arguments -Wwrite-strings: YES 
00:05:55.548 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:55.548 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:55.548 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:55.548 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:55.548 Program objdump found: YES (/usr/bin/objdump) 00:05:55.548 Compiler for C supports arguments -mavx512f: YES 00:05:55.548 Checking if "AVX512 checking" compiles: YES 00:05:55.548 Fetching value of define "__SSE4_2__" : 1 00:05:55.548 Fetching value of define "__AES__" : 1 00:05:55.548 Fetching value of define "__AVX__" : 1 00:05:55.548 Fetching value of define "__AVX2__" : 1 00:05:55.548 Fetching value of define "__AVX512BW__" : (undefined) 00:05:55.548 Fetching value of define "__AVX512CD__" : (undefined) 00:05:55.548 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:55.548 Fetching value of define "__AVX512F__" : (undefined) 00:05:55.548 Fetching value of define "__AVX512VL__" : (undefined) 00:05:55.548 Fetching value of define "__PCLMUL__" : 1 00:05:55.548 Fetching value of define "__RDRND__" : 1 00:05:55.548 Fetching value of define "__RDSEED__" : 1 00:05:55.548 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:55.548 Fetching value of define "__znver1__" : (undefined) 00:05:55.548 Fetching value of define "__znver2__" : (undefined) 00:05:55.548 Fetching value of define "__znver3__" : (undefined) 00:05:55.548 Fetching value of define "__znver4__" : (undefined) 00:05:55.548 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:55.548 Message: lib/log: Defining dependency "log" 00:05:55.548 Message: lib/kvargs: Defining dependency "kvargs" 00:05:55.548 Message: lib/telemetry: Defining dependency "telemetry" 00:05:55.548 Checking for function "getentropy" : NO 00:05:55.548 Message: lib/eal: Defining dependency "eal" 00:05:55.548 Message: lib/ring: Defining dependency "ring" 00:05:55.548 Message: lib/rcu: Defining dependency "rcu" 00:05:55.548 Message: lib/mempool: Defining dependency "mempool" 00:05:55.548 Message: lib/mbuf: Defining dependency "mbuf" 00:05:55.548 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:55.548 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:55.548 Compiler for C supports arguments -mpclmul: YES 00:05:55.548 Compiler for C supports arguments -maes: YES 00:05:55.548 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:55.548 Compiler for C supports arguments -mavx512bw: YES 00:05:55.548 Compiler for C supports arguments -mavx512dq: YES 00:05:55.548 Compiler for C supports arguments -mavx512vl: YES 00:05:55.548 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:55.548 Compiler for C supports arguments -mavx2: YES 00:05:55.548 Compiler for C supports arguments -mavx: YES 00:05:55.548 Message: lib/net: Defining dependency "net" 00:05:55.548 Message: lib/meter: Defining dependency "meter" 00:05:55.548 Message: lib/ethdev: Defining dependency "ethdev" 00:05:55.548 Message: lib/pci: Defining dependency "pci" 00:05:55.548 Message: lib/cmdline: Defining dependency "cmdline" 00:05:55.548 Message: lib/hash: Defining dependency "hash" 00:05:55.548 Message: lib/timer: Defining dependency "timer" 00:05:55.548 Message: lib/compressdev: Defining dependency "compressdev" 00:05:55.548 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:55.548 Message: lib/dmadev: Defining dependency "dmadev" 00:05:55.548 Compiler for C supports arguments -Wno-cast-qual: YES 
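Annotation: each 'Compiler for C supports arguments ...' and 'Fetching value of define ...' line in the Meson output is a small compile probe against cc. The same checks can be reproduced by hand; a sketch, assuming the gcc 13 toolchain reported at the top of the Meson block:

    # does the compiler accept -mavx512f? (the has_argument-style check)
    echo 'int main(void){return 0;}' | cc -mavx512f -Werror -x c -o /dev/null - && echo YES
    # which SIMD macros are predefined? with cpu_instruction_set: native (see the
    # 'User defined options' block below), __AVX2__ is 1 on this builder while
    # __AVX512F__ stays (undefined)
    cc -march=native -dM -E - </dev/null | grep -E '__AVX2__|__AVX512F__'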
00:05:55.548 Message: lib/power: Defining dependency "power" 00:05:55.548 Message: lib/reorder: Defining dependency "reorder" 00:05:55.548 Message: lib/security: Defining dependency "security" 00:05:55.548 Has header "linux/userfaultfd.h" : YES 00:05:55.548 Has header "linux/vduse.h" : YES 00:05:55.548 Message: lib/vhost: Defining dependency "vhost" 00:05:55.548 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:55.548 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:55.548 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:55.548 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:55.548 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:55.549 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:55.549 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:55.549 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:55.549 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:55.549 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:55.549 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:55.549 Configuring doxy-api-html.conf using configuration 00:05:55.549 Configuring doxy-api-man.conf using configuration 00:05:55.549 Program mandb found: YES (/usr/bin/mandb) 00:05:55.549 Program sphinx-build found: NO 00:05:55.549 Configuring rte_build_config.h using configuration 00:05:55.549 Message: 00:05:55.549 ================= 00:05:55.549 Applications Enabled 00:05:55.549 ================= 00:05:55.549 00:05:55.549 apps: 00:05:55.549 00:05:55.549 00:05:55.549 Message: 00:05:55.549 ================= 00:05:55.549 Libraries Enabled 00:05:55.549 ================= 00:05:55.549 00:05:55.549 libs: 00:05:55.549 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:55.549 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:55.549 cryptodev, dmadev, power, reorder, security, vhost, 00:05:55.549 00:05:55.549 Message: 00:05:55.549 =============== 00:05:55.549 Drivers Enabled 00:05:55.549 =============== 00:05:55.549 00:05:55.549 common: 00:05:55.549 00:05:55.549 bus: 00:05:55.549 pci, vdev, 00:05:55.549 mempool: 00:05:55.549 ring, 00:05:55.549 dma: 00:05:55.549 00:05:55.549 net: 00:05:55.549 00:05:55.549 crypto: 00:05:55.549 00:05:55.549 compress: 00:05:55.549 00:05:55.549 vdpa: 00:05:55.549 00:05:55.549 00:05:55.549 Message: 00:05:55.549 ================= 00:05:55.549 Content Skipped 00:05:55.549 ================= 00:05:55.549 00:05:55.549 apps: 00:05:55.549 dumpcap: explicitly disabled via build config 00:05:55.549 graph: explicitly disabled via build config 00:05:55.549 pdump: explicitly disabled via build config 00:05:55.549 proc-info: explicitly disabled via build config 00:05:55.549 test-acl: explicitly disabled via build config 00:05:55.549 test-bbdev: explicitly disabled via build config 00:05:55.549 test-cmdline: explicitly disabled via build config 00:05:55.549 test-compress-perf: explicitly disabled via build config 00:05:55.549 test-crypto-perf: explicitly disabled via build config 00:05:55.549 test-dma-perf: explicitly disabled via build config 00:05:55.549 test-eventdev: explicitly disabled via build config 00:05:55.549 test-fib: explicitly disabled via build config 00:05:55.549 test-flow-perf: explicitly disabled via build config 00:05:55.549 test-gpudev: explicitly disabled via build config 00:05:55.549 test-mldev: explicitly disabled via 
build config 00:05:55.549 test-pipeline: explicitly disabled via build config 00:05:55.549 test-pmd: explicitly disabled via build config 00:05:55.549 test-regex: explicitly disabled via build config 00:05:55.549 test-sad: explicitly disabled via build config 00:05:55.549 test-security-perf: explicitly disabled via build config 00:05:55.549 00:05:55.549 libs: 00:05:55.549 argparse: explicitly disabled via build config 00:05:55.549 metrics: explicitly disabled via build config 00:05:55.549 acl: explicitly disabled via build config 00:05:55.549 bbdev: explicitly disabled via build config 00:05:55.549 bitratestats: explicitly disabled via build config 00:05:55.549 bpf: explicitly disabled via build config 00:05:55.549 cfgfile: explicitly disabled via build config 00:05:55.549 distributor: explicitly disabled via build config 00:05:55.549 efd: explicitly disabled via build config 00:05:55.549 eventdev: explicitly disabled via build config 00:05:55.549 dispatcher: explicitly disabled via build config 00:05:55.549 gpudev: explicitly disabled via build config 00:05:55.549 gro: explicitly disabled via build config 00:05:55.549 gso: explicitly disabled via build config 00:05:55.549 ip_frag: explicitly disabled via build config 00:05:55.549 jobstats: explicitly disabled via build config 00:05:55.549 latencystats: explicitly disabled via build config 00:05:55.549 lpm: explicitly disabled via build config 00:05:55.549 member: explicitly disabled via build config 00:05:55.549 pcapng: explicitly disabled via build config 00:05:55.549 rawdev: explicitly disabled via build config 00:05:55.549 regexdev: explicitly disabled via build config 00:05:55.549 mldev: explicitly disabled via build config 00:05:55.549 rib: explicitly disabled via build config 00:05:55.549 sched: explicitly disabled via build config 00:05:55.549 stack: explicitly disabled via build config 00:05:55.549 ipsec: explicitly disabled via build config 00:05:55.549 pdcp: explicitly disabled via build config 00:05:55.549 fib: explicitly disabled via build config 00:05:55.549 port: explicitly disabled via build config 00:05:55.549 pdump: explicitly disabled via build config 00:05:55.549 table: explicitly disabled via build config 00:05:55.549 pipeline: explicitly disabled via build config 00:05:55.549 graph: explicitly disabled via build config 00:05:55.549 node: explicitly disabled via build config 00:05:55.549 00:05:55.549 drivers: 00:05:55.549 common/cpt: not in enabled drivers build config 00:05:55.549 common/dpaax: not in enabled drivers build config 00:05:55.549 common/iavf: not in enabled drivers build config 00:05:55.549 common/idpf: not in enabled drivers build config 00:05:55.549 common/ionic: not in enabled drivers build config 00:05:55.549 common/mvep: not in enabled drivers build config 00:05:55.549 common/octeontx: not in enabled drivers build config 00:05:55.549 bus/auxiliary: not in enabled drivers build config 00:05:55.549 bus/cdx: not in enabled drivers build config 00:05:55.549 bus/dpaa: not in enabled drivers build config 00:05:55.549 bus/fslmc: not in enabled drivers build config 00:05:55.549 bus/ifpga: not in enabled drivers build config 00:05:55.549 bus/platform: not in enabled drivers build config 00:05:55.549 bus/uacce: not in enabled drivers build config 00:05:55.549 bus/vmbus: not in enabled drivers build config 00:05:55.549 common/cnxk: not in enabled drivers build config 00:05:55.549 common/mlx5: not in enabled drivers build config 00:05:55.549 common/nfp: not in enabled drivers build config 00:05:55.549 
common/nitrox: not in enabled drivers build config 00:05:55.549 common/qat: not in enabled drivers build config 00:05:55.549 common/sfc_efx: not in enabled drivers build config 00:05:55.549 mempool/bucket: not in enabled drivers build config 00:05:55.549 mempool/cnxk: not in enabled drivers build config 00:05:55.549 mempool/dpaa: not in enabled drivers build config 00:05:55.549 mempool/dpaa2: not in enabled drivers build config 00:05:55.549 mempool/octeontx: not in enabled drivers build config 00:05:55.549 mempool/stack: not in enabled drivers build config 00:05:55.549 dma/cnxk: not in enabled drivers build config 00:05:55.549 dma/dpaa: not in enabled drivers build config 00:05:55.549 dma/dpaa2: not in enabled drivers build config 00:05:55.549 dma/hisilicon: not in enabled drivers build config 00:05:55.550 dma/idxd: not in enabled drivers build config 00:05:55.550 dma/ioat: not in enabled drivers build config 00:05:55.550 dma/skeleton: not in enabled drivers build config 00:05:55.550 net/af_packet: not in enabled drivers build config 00:05:55.550 net/af_xdp: not in enabled drivers build config 00:05:55.550 net/ark: not in enabled drivers build config 00:05:55.550 net/atlantic: not in enabled drivers build config 00:05:55.550 net/avp: not in enabled drivers build config 00:05:55.550 net/axgbe: not in enabled drivers build config 00:05:55.550 net/bnx2x: not in enabled drivers build config 00:05:55.550 net/bnxt: not in enabled drivers build config 00:05:55.550 net/bonding: not in enabled drivers build config 00:05:55.550 net/cnxk: not in enabled drivers build config 00:05:55.550 net/cpfl: not in enabled drivers build config 00:05:55.550 net/cxgbe: not in enabled drivers build config 00:05:55.550 net/dpaa: not in enabled drivers build config 00:05:55.550 net/dpaa2: not in enabled drivers build config 00:05:55.550 net/e1000: not in enabled drivers build config 00:05:55.550 net/ena: not in enabled drivers build config 00:05:55.550 net/enetc: not in enabled drivers build config 00:05:55.550 net/enetfec: not in enabled drivers build config 00:05:55.550 net/enic: not in enabled drivers build config 00:05:55.550 net/failsafe: not in enabled drivers build config 00:05:55.550 net/fm10k: not in enabled drivers build config 00:05:55.550 net/gve: not in enabled drivers build config 00:05:55.550 net/hinic: not in enabled drivers build config 00:05:55.550 net/hns3: not in enabled drivers build config 00:05:55.550 net/i40e: not in enabled drivers build config 00:05:55.550 net/iavf: not in enabled drivers build config 00:05:55.550 net/ice: not in enabled drivers build config 00:05:55.550 net/idpf: not in enabled drivers build config 00:05:55.550 net/igc: not in enabled drivers build config 00:05:55.550 net/ionic: not in enabled drivers build config 00:05:55.550 net/ipn3ke: not in enabled drivers build config 00:05:55.550 net/ixgbe: not in enabled drivers build config 00:05:55.550 net/mana: not in enabled drivers build config 00:05:55.550 net/memif: not in enabled drivers build config 00:05:55.550 net/mlx4: not in enabled drivers build config 00:05:55.550 net/mlx5: not in enabled drivers build config 00:05:55.550 net/mvneta: not in enabled drivers build config 00:05:55.550 net/mvpp2: not in enabled drivers build config 00:05:55.550 net/netvsc: not in enabled drivers build config 00:05:55.550 net/nfb: not in enabled drivers build config 00:05:55.550 net/nfp: not in enabled drivers build config 00:05:55.550 net/ngbe: not in enabled drivers build config 00:05:55.550 net/null: not in enabled drivers build config 
00:05:55.550 net/octeontx: not in enabled drivers build config 00:05:55.550 net/octeon_ep: not in enabled drivers build config 00:05:55.550 net/pcap: not in enabled drivers build config 00:05:55.550 net/pfe: not in enabled drivers build config 00:05:55.550 net/qede: not in enabled drivers build config 00:05:55.550 net/ring: not in enabled drivers build config 00:05:55.550 net/sfc: not in enabled drivers build config 00:05:55.550 net/softnic: not in enabled drivers build config 00:05:55.550 net/tap: not in enabled drivers build config 00:05:55.550 net/thunderx: not in enabled drivers build config 00:05:55.550 net/txgbe: not in enabled drivers build config 00:05:55.550 net/vdev_netvsc: not in enabled drivers build config 00:05:55.550 net/vhost: not in enabled drivers build config 00:05:55.550 net/virtio: not in enabled drivers build config 00:05:55.550 net/vmxnet3: not in enabled drivers build config 00:05:55.550 raw/*: missing internal dependency, "rawdev" 00:05:55.550 crypto/armv8: not in enabled drivers build config 00:05:55.550 crypto/bcmfs: not in enabled drivers build config 00:05:55.550 crypto/caam_jr: not in enabled drivers build config 00:05:55.550 crypto/ccp: not in enabled drivers build config 00:05:55.550 crypto/cnxk: not in enabled drivers build config 00:05:55.550 crypto/dpaa_sec: not in enabled drivers build config 00:05:55.550 crypto/dpaa2_sec: not in enabled drivers build config 00:05:55.550 crypto/ipsec_mb: not in enabled drivers build config 00:05:55.550 crypto/mlx5: not in enabled drivers build config 00:05:55.550 crypto/mvsam: not in enabled drivers build config 00:05:55.550 crypto/nitrox: not in enabled drivers build config 00:05:55.550 crypto/null: not in enabled drivers build config 00:05:55.550 crypto/octeontx: not in enabled drivers build config 00:05:55.550 crypto/openssl: not in enabled drivers build config 00:05:55.550 crypto/scheduler: not in enabled drivers build config 00:05:55.550 crypto/uadk: not in enabled drivers build config 00:05:55.550 crypto/virtio: not in enabled drivers build config 00:05:55.550 compress/isal: not in enabled drivers build config 00:05:55.550 compress/mlx5: not in enabled drivers build config 00:05:55.550 compress/nitrox: not in enabled drivers build config 00:05:55.550 compress/octeontx: not in enabled drivers build config 00:05:55.550 compress/zlib: not in enabled drivers build config 00:05:55.550 regex/*: missing internal dependency, "regexdev" 00:05:55.550 ml/*: missing internal dependency, "mldev" 00:05:55.550 vdpa/ifc: not in enabled drivers build config 00:05:55.550 vdpa/mlx5: not in enabled drivers build config 00:05:55.550 vdpa/nfp: not in enabled drivers build config 00:05:55.550 vdpa/sfc: not in enabled drivers build config 00:05:55.550 event/*: missing internal dependency, "eventdev" 00:05:55.550 baseband/*: missing internal dependency, "bbdev" 00:05:55.550 gpu/*: missing internal dependency, "gpudev" 00:05:55.550 00:05:55.550 00:05:55.550 Build targets in project: 85 00:05:55.550 00:05:55.550 DPDK 24.03.0 00:05:55.550 00:05:55.550 User defined options 00:05:55.550 buildtype : debug 00:05:55.550 default_library : shared 00:05:55.550 libdir : lib 00:05:55.550 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:55.550 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:55.550 c_link_args : 00:05:55.550 cpu_instruction_set: native 00:05:55.550 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:55.550 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:55.550 enable_docs : false 00:05:55.550 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:55.550 enable_kmods : false 00:05:55.550 max_lcores : 128 00:05:55.550 tests : false 00:05:55.550 00:05:55.550 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:55.550 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:55.550 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:55.550 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:55.550 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:55.550 [4/268] Linking static target lib/librte_kvargs.a 00:05:55.550 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:55.550 [6/268] Linking static target lib/librte_log.a 00:05:56.118 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.118 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:56.118 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:56.376 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:56.376 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:56.687 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:56.687 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:56.687 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:56.687 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:56.687 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:56.687 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:56.687 [18/268] Linking static target lib/librte_telemetry.a 00:05:56.687 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.687 [20/268] Linking target lib/librte_log.so.24.1 00:05:56.945 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:57.204 [22/268] Linking target lib/librte_kvargs.so.24.1 00:05:57.204 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:57.204 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:57.463 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:57.463 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:57.463 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:57.463 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:57.464 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.464 [30/268] 
Linking target lib/librte_telemetry.so.24.1 00:05:57.722 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:57.722 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:57.722 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:57.722 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:57.722 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:57.722 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:57.981 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:58.239 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:58.239 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:58.239 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:58.498 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:58.498 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:58.498 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:58.498 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:58.498 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:58.498 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:58.757 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:58.757 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:58.757 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:58.757 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:59.015 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:59.274 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:59.274 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:59.274 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:59.533 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:59.533 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:59.533 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:59.533 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:59.791 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:59.791 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:59.791 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:00.358 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:00.358 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:00.358 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:00.358 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:00.358 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:00.617 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:00.617 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:00.617 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:00.617 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:00.875 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:00.875 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:00.875 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:00.875 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:00.875 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:01.133 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:01.133 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:01.133 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:01.133 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:01.391 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:01.391 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:01.391 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:01.650 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:01.650 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:01.650 [85/268] Linking static target lib/librte_ring.a 00:06:01.650 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:01.650 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:01.650 [88/268] Linking static target lib/librte_rcu.a 00:06:01.650 [89/268] Linking static target lib/librte_eal.a 00:06:01.908 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:01.908 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:02.166 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.166 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:02.166 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:02.166 [95/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.166 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:02.166 [97/268] Linking static target lib/librte_mempool.a 00:06:02.425 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:02.425 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:02.684 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:02.684 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:02.684 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:02.684 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:02.943 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:02.943 [105/268] Linking static target lib/librte_mbuf.a 00:06:02.943 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:02.943 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:02.943 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:02.943 [109/268] Linking static target lib/librte_net.a 00:06:03.201 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:03.201 [111/268] Linking static target lib/librte_meter.a 00:06:03.460 [112/268] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.460 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:03.460 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:03.460 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:03.460 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.460 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:03.719 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.977 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.977 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:04.236 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:04.494 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:04.494 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:04.494 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:04.752 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:04.752 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:04.752 [127/268] Linking static target lib/librte_pci.a 00:06:04.752 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:04.752 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:04.752 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:04.752 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:04.752 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:05.010 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:05.010 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:05.010 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:05.010 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:05.010 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:05.010 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:05.010 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:05.010 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:05.269 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:05.269 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:05.269 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:05.269 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:05.269 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:05.871 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:05.871 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:05.871 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:05.871 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:05.871 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:05.871 [151/268] 
Linking static target lib/librte_cmdline.a 00:06:06.128 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:06.129 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:06.129 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:06.129 [155/268] Linking static target lib/librte_hash.a 00:06:06.129 [156/268] Linking static target lib/librte_ethdev.a 00:06:06.386 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:06.386 [158/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:06.386 [159/268] Linking static target lib/librte_timer.a 00:06:06.642 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:06.642 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:06.643 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:06.643 [163/268] Linking static target lib/librte_compressdev.a 00:06:06.643 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:07.206 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.206 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:07.206 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:07.206 [168/268] Linking static target lib/librte_dmadev.a 00:06:07.206 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:07.206 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:07.206 [171/268] Linking static target lib/librte_cryptodev.a 00:06:07.462 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:07.462 [173/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.462 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:07.719 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.719 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.719 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:07.719 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:07.976 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:07.976 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:08.235 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:08.235 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:08.235 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:08.235 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:08.492 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:08.492 [186/268] Linking static target lib/librte_power.a 00:06:08.750 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:09.008 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:09.008 [189/268] Linking static target lib/librte_security.a 00:06:09.008 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:09.008 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:09.008 [192/268] Linking static target lib/librte_reorder.a 00:06:09.266 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:09.525 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:09.525 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:09.783 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:09.784 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:09.784 [198/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:10.041 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:10.041 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:10.300 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:10.559 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:10.559 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:10.559 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:10.559 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:10.559 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:10.817 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:10.817 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:11.076 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:11.076 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:11.076 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:11.076 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:11.334 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:11.334 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:11.334 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:11.334 [216/268] Linking static target drivers/librte_bus_pci.a 00:06:11.334 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:11.334 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:11.334 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:11.334 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:11.334 [221/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:11.334 [222/268] Linking static target drivers/librte_bus_vdev.a 00:06:11.592 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:11.592 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:11.592 [225/268] Linking static target drivers/librte_mempool_ring.a 00:06:11.592 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:11.592 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.850 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:06:12.416 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:12.416 [230/268] Linking static target lib/librte_vhost.a 00:06:13.348 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.348 [232/268] Linking target lib/librte_eal.so.24.1 00:06:13.348 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:13.348 [234/268] Linking target lib/librte_pci.so.24.1 00:06:13.606 [235/268] Linking target lib/librte_meter.so.24.1 00:06:13.606 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:13.606 [237/268] Linking target lib/librte_ring.so.24.1 00:06:13.606 [238/268] Linking target lib/librte_timer.so.24.1 00:06:13.606 [239/268] Linking target lib/librte_dmadev.so.24.1 00:06:13.606 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:13.606 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:13.606 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:13.606 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:13.606 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:13.606 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:13.606 [246/268] Linking target lib/librte_mempool.so.24.1 00:06:13.606 [247/268] Linking target lib/librte_rcu.so.24.1 00:06:13.606 [248/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.863 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:13.863 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:13.863 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:13.863 [252/268] Linking target lib/librte_mbuf.so.24.1 00:06:14.121 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:14.121 [254/268] Linking target lib/librte_net.so.24.1 00:06:14.121 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:06:14.121 [256/268] Linking target lib/librte_reorder.so.24.1 00:06:14.121 [257/268] Linking target lib/librte_compressdev.so.24.1 00:06:14.121 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:14.121 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:14.121 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:14.379 [261/268] Linking target lib/librte_hash.so.24.1 00:06:14.379 [262/268] Linking target lib/librte_security.so.24.1 00:06:14.379 [263/268] Linking target lib/librte_cmdline.so.24.1 00:06:14.379 [264/268] Linking target lib/librte_ethdev.so.24.1 00:06:14.379 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:14.379 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:14.638 [267/268] Linking target lib/librte_power.so.24.1 00:06:14.638 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:14.638 INFO: autodetecting backend as ninja 00:06:14.638 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:46.747 CC lib/ut/ut.o 00:06:46.747 CC lib/ut_mock/mock.o 00:06:46.747 CC lib/log/log.o 00:06:46.747 CC lib/log/log_deprecated.o 
00:06:46.747 CC lib/log/log_flags.o 00:06:46.747 LIB libspdk_ut_mock.a 00:06:46.747 SO libspdk_ut_mock.so.6.0 00:06:46.747 LIB libspdk_ut.a 00:06:46.747 SO libspdk_ut.so.2.0 00:06:46.747 SYMLINK libspdk_ut_mock.so 00:06:46.747 SYMLINK libspdk_ut.so 00:06:46.747 LIB libspdk_log.a 00:06:46.747 SO libspdk_log.so.7.1 00:06:46.747 SYMLINK libspdk_log.so 00:06:46.747 CC lib/ioat/ioat.o 00:06:46.747 CXX lib/trace_parser/trace.o 00:06:46.747 CC lib/util/base64.o 00:06:46.747 CC lib/util/bit_array.o 00:06:46.747 CC lib/dma/dma.o 00:06:46.747 CC lib/util/crc16.o 00:06:46.747 CC lib/util/cpuset.o 00:06:46.747 CC lib/util/crc32.o 00:06:46.747 CC lib/util/crc32c.o 00:06:46.747 CC lib/vfio_user/host/vfio_user_pci.o 00:06:46.747 CC lib/util/crc32_ieee.o 00:06:46.747 CC lib/util/crc64.o 00:06:46.747 CC lib/util/dif.o 00:06:46.747 CC lib/util/fd.o 00:06:46.747 CC lib/util/fd_group.o 00:06:46.747 LIB libspdk_dma.a 00:06:46.747 SO libspdk_dma.so.5.0 00:06:46.747 CC lib/util/file.o 00:06:46.747 CC lib/vfio_user/host/vfio_user.o 00:06:46.747 CC lib/util/hexlify.o 00:06:46.747 SYMLINK libspdk_dma.so 00:06:46.747 CC lib/util/iov.o 00:06:46.747 LIB libspdk_ioat.a 00:06:46.747 SO libspdk_ioat.so.7.0 00:06:46.747 CC lib/util/math.o 00:06:46.747 CC lib/util/net.o 00:06:46.747 SYMLINK libspdk_ioat.so 00:06:46.747 CC lib/util/pipe.o 00:06:46.747 CC lib/util/strerror_tls.o 00:06:46.747 CC lib/util/string.o 00:06:46.747 LIB libspdk_vfio_user.a 00:06:46.747 CC lib/util/uuid.o 00:06:46.747 CC lib/util/xor.o 00:06:46.747 CC lib/util/zipf.o 00:06:46.747 SO libspdk_vfio_user.so.5.0 00:06:46.747 CC lib/util/md5.o 00:06:46.747 SYMLINK libspdk_vfio_user.so 00:06:46.747 LIB libspdk_trace_parser.a 00:06:46.747 LIB libspdk_util.a 00:06:46.747 SO libspdk_trace_parser.so.6.0 00:06:46.747 SO libspdk_util.so.10.1 00:06:46.747 SYMLINK libspdk_trace_parser.so 00:06:46.747 SYMLINK libspdk_util.so 00:06:46.747 CC lib/json/json_parse.o 00:06:46.747 CC lib/json/json_util.o 00:06:46.747 CC lib/json/json_write.o 00:06:46.747 CC lib/vmd/vmd.o 00:06:46.747 CC lib/vmd/led.o 00:06:46.747 CC lib/idxd/idxd.o 00:06:46.747 CC lib/idxd/idxd_user.o 00:06:46.747 CC lib/conf/conf.o 00:06:46.747 CC lib/env_dpdk/env.o 00:06:46.747 CC lib/rdma_utils/rdma_utils.o 00:06:46.747 CC lib/env_dpdk/memory.o 00:06:46.747 LIB libspdk_conf.a 00:06:46.747 CC lib/idxd/idxd_kernel.o 00:06:46.747 SO libspdk_conf.so.6.0 00:06:46.747 SYMLINK libspdk_conf.so 00:06:46.747 CC lib/env_dpdk/pci.o 00:06:46.747 CC lib/env_dpdk/init.o 00:06:46.747 CC lib/env_dpdk/threads.o 00:06:46.747 LIB libspdk_json.a 00:06:46.747 LIB libspdk_rdma_utils.a 00:06:46.747 CC lib/env_dpdk/pci_ioat.o 00:06:46.747 SO libspdk_rdma_utils.so.1.0 00:06:46.747 SO libspdk_json.so.6.0 00:06:46.747 SYMLINK libspdk_rdma_utils.so 00:06:46.747 CC lib/env_dpdk/pci_virtio.o 00:06:46.747 SYMLINK libspdk_json.so 00:06:46.747 CC lib/env_dpdk/pci_vmd.o 00:06:46.747 CC lib/env_dpdk/pci_idxd.o 00:06:46.747 LIB libspdk_idxd.a 00:06:46.747 CC lib/rdma_provider/common.o 00:06:46.747 CC lib/env_dpdk/pci_event.o 00:06:46.747 CC lib/env_dpdk/sigbus_handler.o 00:06:46.747 SO libspdk_idxd.so.12.1 00:06:46.747 CC lib/env_dpdk/pci_dpdk.o 00:06:46.747 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:46.747 CC lib/jsonrpc/jsonrpc_server.o 00:06:46.747 SYMLINK libspdk_idxd.so 00:06:46.747 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:47.004 CC lib/jsonrpc/jsonrpc_client.o 00:06:47.004 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:47.004 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:47.004 LIB libspdk_vmd.a 00:06:47.004 SO libspdk_vmd.so.6.0 00:06:47.004 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:06:47.004 SYMLINK libspdk_vmd.so 00:06:47.262 LIB libspdk_jsonrpc.a 00:06:47.262 LIB libspdk_rdma_provider.a 00:06:47.262 SO libspdk_rdma_provider.so.7.0 00:06:47.262 SO libspdk_jsonrpc.so.6.0 00:06:47.520 SYMLINK libspdk_rdma_provider.so 00:06:47.520 SYMLINK libspdk_jsonrpc.so 00:06:47.520 LIB libspdk_env_dpdk.a 00:06:47.520 SO libspdk_env_dpdk.so.15.1 00:06:47.778 CC lib/rpc/rpc.o 00:06:47.778 SYMLINK libspdk_env_dpdk.so 00:06:48.036 LIB libspdk_rpc.a 00:06:48.036 SO libspdk_rpc.so.6.0 00:06:48.036 SYMLINK libspdk_rpc.so 00:06:48.295 CC lib/trace/trace.o 00:06:48.295 CC lib/notify/notify.o 00:06:48.295 CC lib/trace/trace_flags.o 00:06:48.295 CC lib/trace/trace_rpc.o 00:06:48.295 CC lib/keyring/keyring_rpc.o 00:06:48.295 CC lib/keyring/keyring.o 00:06:48.295 CC lib/notify/notify_rpc.o 00:06:48.554 LIB libspdk_notify.a 00:06:48.554 LIB libspdk_keyring.a 00:06:48.554 SO libspdk_notify.so.6.0 00:06:48.811 SO libspdk_keyring.so.2.0 00:06:48.811 SYMLINK libspdk_notify.so 00:06:48.811 LIB libspdk_trace.a 00:06:48.811 SYMLINK libspdk_keyring.so 00:06:48.811 SO libspdk_trace.so.11.0 00:06:48.811 SYMLINK libspdk_trace.so 00:06:49.069 CC lib/thread/thread.o 00:06:49.069 CC lib/thread/iobuf.o 00:06:49.069 CC lib/sock/sock.o 00:06:49.069 CC lib/sock/sock_rpc.o 00:06:49.635 LIB libspdk_sock.a 00:06:49.892 SO libspdk_sock.so.10.0 00:06:49.892 SYMLINK libspdk_sock.so 00:06:50.150 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:50.150 CC lib/nvme/nvme_ctrlr.o 00:06:50.150 CC lib/nvme/nvme_fabric.o 00:06:50.150 CC lib/nvme/nvme_ns.o 00:06:50.150 CC lib/nvme/nvme_ns_cmd.o 00:06:50.150 CC lib/nvme/nvme_pcie_common.o 00:06:50.150 CC lib/nvme/nvme_qpair.o 00:06:50.150 CC lib/nvme/nvme_pcie.o 00:06:50.150 CC lib/nvme/nvme.o 00:06:51.084 CC lib/nvme/nvme_quirks.o 00:06:51.084 CC lib/nvme/nvme_transport.o 00:06:51.084 LIB libspdk_thread.a 00:06:51.343 SO libspdk_thread.so.11.0 00:06:51.343 CC lib/nvme/nvme_discovery.o 00:06:51.343 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:51.343 SYMLINK libspdk_thread.so 00:06:51.343 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:51.600 CC lib/nvme/nvme_tcp.o 00:06:51.600 CC lib/nvme/nvme_opal.o 00:06:51.859 CC lib/nvme/nvme_io_msg.o 00:06:51.859 CC lib/nvme/nvme_poll_group.o 00:06:52.115 CC lib/nvme/nvme_zns.o 00:06:52.115 CC lib/accel/accel.o 00:06:52.372 CC lib/blob/blobstore.o 00:06:52.372 CC lib/init/json_config.o 00:06:52.372 CC lib/init/subsystem.o 00:06:52.628 CC lib/nvme/nvme_stubs.o 00:06:52.628 CC lib/accel/accel_rpc.o 00:06:52.628 CC lib/init/subsystem_rpc.o 00:06:52.628 CC lib/init/rpc.o 00:06:52.886 CC lib/nvme/nvme_auth.o 00:06:52.886 LIB libspdk_init.a 00:06:52.886 SO libspdk_init.so.6.0 00:06:52.886 CC lib/fsdev/fsdev.o 00:06:52.886 CC lib/virtio/virtio.o 00:06:53.145 SYMLINK libspdk_init.so 00:06:53.145 CC lib/blob/request.o 00:06:53.145 CC lib/accel/accel_sw.o 00:06:53.401 CC lib/virtio/virtio_vhost_user.o 00:06:53.401 CC lib/virtio/virtio_vfio_user.o 00:06:53.659 CC lib/virtio/virtio_pci.o 00:06:53.659 CC lib/blob/zeroes.o 00:06:53.659 CC lib/event/app.o 00:06:53.659 CC lib/blob/blob_bs_dev.o 00:06:53.659 CC lib/fsdev/fsdev_io.o 00:06:53.924 CC lib/nvme/nvme_cuse.o 00:06:53.924 LIB libspdk_accel.a 00:06:53.924 LIB libspdk_virtio.a 00:06:53.924 SO libspdk_virtio.so.7.0 00:06:53.924 SO libspdk_accel.so.16.0 00:06:53.924 CC lib/nvme/nvme_rdma.o 00:06:53.924 CC lib/event/reactor.o 00:06:54.182 SYMLINK libspdk_virtio.so 00:06:54.182 CC lib/event/log_rpc.o 00:06:54.182 CC lib/event/app_rpc.o 00:06:54.182 SYMLINK libspdk_accel.so 00:06:54.182 CC 
lib/fsdev/fsdev_rpc.o 00:06:54.182 CC lib/event/scheduler_static.o 00:06:54.439 LIB libspdk_fsdev.a 00:06:54.439 CC lib/bdev/bdev.o 00:06:54.439 CC lib/bdev/bdev_rpc.o 00:06:54.439 CC lib/bdev/bdev_zone.o 00:06:54.439 CC lib/bdev/part.o 00:06:54.439 SO libspdk_fsdev.so.2.0 00:06:54.439 CC lib/bdev/scsi_nvme.o 00:06:54.698 SYMLINK libspdk_fsdev.so 00:06:54.957 LIB libspdk_event.a 00:06:54.957 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:54.957 SO libspdk_event.so.14.0 00:06:54.957 SYMLINK libspdk_event.so 00:06:55.615 LIB libspdk_nvme.a 00:06:55.875 LIB libspdk_fuse_dispatcher.a 00:06:55.875 SO libspdk_nvme.so.15.0 00:06:55.875 SO libspdk_fuse_dispatcher.so.1.0 00:06:56.133 SYMLINK libspdk_fuse_dispatcher.so 00:06:56.391 SYMLINK libspdk_nvme.so 00:06:56.391 LIB libspdk_blob.a 00:06:56.391 SO libspdk_blob.so.12.0 00:06:56.650 SYMLINK libspdk_blob.so 00:06:56.907 CC lib/blobfs/tree.o 00:06:56.907 CC lib/blobfs/blobfs.o 00:06:56.907 CC lib/lvol/lvol.o 00:06:57.839 LIB libspdk_blobfs.a 00:06:57.839 SO libspdk_blobfs.so.11.0 00:06:57.839 SYMLINK libspdk_blobfs.so 00:06:58.097 LIB libspdk_lvol.a 00:06:58.097 LIB libspdk_bdev.a 00:06:58.097 SO libspdk_lvol.so.11.0 00:06:58.097 SYMLINK libspdk_lvol.so 00:06:58.097 SO libspdk_bdev.so.17.0 00:06:58.097 SYMLINK libspdk_bdev.so 00:06:58.354 CC lib/scsi/dev.o 00:06:58.354 CC lib/scsi/lun.o 00:06:58.354 CC lib/scsi/port.o 00:06:58.354 CC lib/scsi/scsi.o 00:06:58.354 CC lib/scsi/scsi_bdev.o 00:06:58.354 CC lib/scsi/scsi_pr.o 00:06:58.354 CC lib/ftl/ftl_core.o 00:06:58.354 CC lib/nvmf/ctrlr.o 00:06:58.354 CC lib/ublk/ublk.o 00:06:58.354 CC lib/nbd/nbd.o 00:06:58.647 CC lib/ublk/ublk_rpc.o 00:06:58.647 CC lib/ftl/ftl_init.o 00:06:58.906 CC lib/ftl/ftl_layout.o 00:06:58.906 CC lib/nvmf/ctrlr_discovery.o 00:06:58.906 CC lib/nvmf/ctrlr_bdev.o 00:06:58.906 CC lib/scsi/scsi_rpc.o 00:06:58.906 CC lib/ftl/ftl_debug.o 00:06:59.164 CC lib/ftl/ftl_io.o 00:06:59.164 CC lib/scsi/task.o 00:06:59.164 CC lib/ftl/ftl_sb.o 00:06:59.164 CC lib/nbd/nbd_rpc.o 00:06:59.421 CC lib/ftl/ftl_l2p.o 00:06:59.421 LIB libspdk_scsi.a 00:06:59.421 CC lib/ftl/ftl_l2p_flat.o 00:06:59.421 SO libspdk_scsi.so.9.0 00:06:59.679 LIB libspdk_nbd.a 00:06:59.679 CC lib/ftl/ftl_nv_cache.o 00:06:59.679 CC lib/nvmf/subsystem.o 00:06:59.679 SO libspdk_nbd.so.7.0 00:06:59.679 CC lib/nvmf/nvmf.o 00:06:59.679 SYMLINK libspdk_scsi.so 00:06:59.679 LIB libspdk_ublk.a 00:06:59.679 CC lib/ftl/ftl_band.o 00:06:59.679 SO libspdk_ublk.so.3.0 00:06:59.679 SYMLINK libspdk_nbd.so 00:06:59.679 CC lib/nvmf/nvmf_rpc.o 00:06:59.679 CC lib/nvmf/transport.o 00:06:59.938 SYMLINK libspdk_ublk.so 00:06:59.938 CC lib/nvmf/tcp.o 00:07:00.196 CC lib/iscsi/conn.o 00:07:00.196 CC lib/nvmf/stubs.o 00:07:00.196 CC lib/ftl/ftl_band_ops.o 00:07:00.455 CC lib/ftl/ftl_writer.o 00:07:00.455 CC lib/ftl/ftl_rq.o 00:07:00.713 CC lib/ftl/ftl_reloc.o 00:07:00.713 CC lib/iscsi/init_grp.o 00:07:00.713 CC lib/ftl/ftl_l2p_cache.o 00:07:00.970 CC lib/ftl/ftl_p2l.o 00:07:00.970 CC lib/nvmf/mdns_server.o 00:07:00.970 CC lib/ftl/ftl_p2l_log.o 00:07:00.970 CC lib/iscsi/iscsi.o 00:07:00.970 CC lib/ftl/mngt/ftl_mngt.o 00:07:01.536 CC lib/iscsi/param.o 00:07:01.536 CC lib/iscsi/portal_grp.o 00:07:01.536 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:01.536 CC lib/nvmf/rdma.o 00:07:01.536 CC lib/vhost/vhost.o 00:07:01.536 CC lib/nvmf/auth.o 00:07:01.794 CC lib/vhost/vhost_rpc.o 00:07:02.052 CC lib/iscsi/tgt_node.o 00:07:02.052 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:02.052 CC lib/iscsi/iscsi_subsystem.o 00:07:02.052 CC lib/iscsi/iscsi_rpc.o 00:07:02.310 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:07:02.569 CC lib/vhost/vhost_scsi.o 00:07:02.569 CC lib/vhost/vhost_blk.o 00:07:02.569 CC lib/iscsi/task.o 00:07:02.569 CC lib/vhost/rte_vhost_user.o 00:07:02.569 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:02.828 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:02.828 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:02.828 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:02.828 LIB libspdk_iscsi.a 00:07:03.090 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:03.090 SO libspdk_iscsi.so.8.0 00:07:03.090 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:03.370 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:03.370 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:03.370 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:03.370 SYMLINK libspdk_iscsi.so 00:07:03.370 CC lib/ftl/utils/ftl_conf.o 00:07:03.370 CC lib/ftl/utils/ftl_md.o 00:07:03.626 CC lib/ftl/utils/ftl_mempool.o 00:07:03.626 CC lib/ftl/utils/ftl_bitmap.o 00:07:03.626 CC lib/ftl/utils/ftl_property.o 00:07:03.626 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:03.626 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:03.883 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:03.883 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:03.883 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:03.883 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:03.883 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:04.140 LIB libspdk_nvmf.a 00:07:04.140 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:04.140 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:04.140 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:04.140 SO libspdk_nvmf.so.20.0 00:07:04.140 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:04.140 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:04.140 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:04.398 CC lib/ftl/base/ftl_base_dev.o 00:07:04.398 CC lib/ftl/base/ftl_base_bdev.o 00:07:04.398 CC lib/ftl/ftl_trace.o 00:07:04.398 SYMLINK libspdk_nvmf.so 00:07:04.398 LIB libspdk_vhost.a 00:07:04.656 SO libspdk_vhost.so.8.0 00:07:04.656 SYMLINK libspdk_vhost.so 00:07:04.656 LIB libspdk_ftl.a 00:07:04.915 SO libspdk_ftl.so.9.0 00:07:05.478 SYMLINK libspdk_ftl.so 00:07:05.734 CC module/env_dpdk/env_dpdk_rpc.o 00:07:05.734 CC module/sock/posix/posix.o 00:07:05.734 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:05.734 CC module/fsdev/aio/fsdev_aio.o 00:07:05.734 CC module/accel/error/accel_error.o 00:07:05.734 CC module/keyring/file/keyring.o 00:07:05.991 CC module/keyring/linux/keyring.o 00:07:05.991 CC module/blob/bdev/blob_bdev.o 00:07:05.991 CC module/accel/ioat/accel_ioat.o 00:07:05.991 CC module/accel/dsa/accel_dsa.o 00:07:05.991 LIB libspdk_env_dpdk_rpc.a 00:07:05.991 SO libspdk_env_dpdk_rpc.so.6.0 00:07:05.991 SYMLINK libspdk_env_dpdk_rpc.so 00:07:05.991 CC module/accel/ioat/accel_ioat_rpc.o 00:07:05.991 CC module/keyring/linux/keyring_rpc.o 00:07:05.991 CC module/keyring/file/keyring_rpc.o 00:07:05.991 CC module/accel/error/accel_error_rpc.o 00:07:06.248 LIB libspdk_scheduler_dynamic.a 00:07:06.248 SO libspdk_scheduler_dynamic.so.4.0 00:07:06.248 LIB libspdk_blob_bdev.a 00:07:06.248 LIB libspdk_accel_ioat.a 00:07:06.248 SO libspdk_blob_bdev.so.12.0 00:07:06.248 SYMLINK libspdk_scheduler_dynamic.so 00:07:06.248 SO libspdk_accel_ioat.so.6.0 00:07:06.248 LIB libspdk_accel_error.a 00:07:06.248 LIB libspdk_keyring_linux.a 00:07:06.248 SYMLINK libspdk_blob_bdev.so 00:07:06.248 LIB libspdk_keyring_file.a 00:07:06.248 SO libspdk_accel_error.so.2.0 00:07:06.248 CC module/accel/iaa/accel_iaa.o 00:07:06.248 CC module/accel/dsa/accel_dsa_rpc.o 00:07:06.504 SYMLINK libspdk_accel_ioat.so 00:07:06.504 SO libspdk_keyring_linux.so.1.0 00:07:06.504 SO libspdk_keyring_file.so.2.0 00:07:06.504 
SYMLINK libspdk_accel_error.so 00:07:06.504 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:06.504 SYMLINK libspdk_keyring_file.so 00:07:06.504 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:06.504 SYMLINK libspdk_keyring_linux.so 00:07:06.504 LIB libspdk_accel_dsa.a 00:07:06.761 CC module/accel/iaa/accel_iaa_rpc.o 00:07:06.761 SO libspdk_accel_dsa.so.5.0 00:07:06.761 CC module/fsdev/aio/linux_aio_mgr.o 00:07:06.761 LIB libspdk_scheduler_dpdk_governor.a 00:07:06.761 SYMLINK libspdk_accel_dsa.so 00:07:06.761 CC module/bdev/error/vbdev_error.o 00:07:06.761 CC module/bdev/delay/vbdev_delay.o 00:07:06.761 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:06.761 CC module/bdev/gpt/gpt.o 00:07:06.761 LIB libspdk_accel_iaa.a 00:07:06.761 CC module/blobfs/bdev/blobfs_bdev.o 00:07:06.761 LIB libspdk_sock_posix.a 00:07:06.761 SO libspdk_accel_iaa.so.3.0 00:07:06.761 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:06.761 CC module/bdev/gpt/vbdev_gpt.o 00:07:07.018 SO libspdk_sock_posix.so.6.0 00:07:07.018 CC module/scheduler/gscheduler/gscheduler.o 00:07:07.018 LIB libspdk_fsdev_aio.a 00:07:07.018 CC module/bdev/lvol/vbdev_lvol.o 00:07:07.018 SO libspdk_fsdev_aio.so.1.0 00:07:07.018 SYMLINK libspdk_accel_iaa.so 00:07:07.018 SYMLINK libspdk_sock_posix.so 00:07:07.018 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:07.018 CC module/bdev/error/vbdev_error_rpc.o 00:07:07.018 SYMLINK libspdk_fsdev_aio.so 00:07:07.018 LIB libspdk_scheduler_gscheduler.a 00:07:07.018 SO libspdk_scheduler_gscheduler.so.4.0 00:07:07.277 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:07.277 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:07.277 LIB libspdk_bdev_gpt.a 00:07:07.277 CC module/bdev/malloc/bdev_malloc.o 00:07:07.277 CC module/bdev/null/bdev_null.o 00:07:07.277 SYMLINK libspdk_scheduler_gscheduler.so 00:07:07.277 SO libspdk_bdev_gpt.so.6.0 00:07:07.277 LIB libspdk_bdev_delay.a 00:07:07.277 CC module/bdev/nvme/bdev_nvme.o 00:07:07.277 SO libspdk_bdev_delay.so.6.0 00:07:07.277 LIB libspdk_bdev_error.a 00:07:07.277 SYMLINK libspdk_bdev_gpt.so 00:07:07.277 LIB libspdk_blobfs_bdev.a 00:07:07.277 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:07.534 SO libspdk_blobfs_bdev.so.6.0 00:07:07.534 SO libspdk_bdev_error.so.6.0 00:07:07.534 SYMLINK libspdk_bdev_delay.so 00:07:07.534 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:07.534 CC module/bdev/passthru/vbdev_passthru.o 00:07:07.534 CC module/bdev/nvme/nvme_rpc.o 00:07:07.534 SYMLINK libspdk_blobfs_bdev.so 00:07:07.534 CC module/bdev/null/bdev_null_rpc.o 00:07:07.534 SYMLINK libspdk_bdev_error.so 00:07:07.534 CC module/bdev/nvme/bdev_mdns_client.o 00:07:07.534 LIB libspdk_bdev_lvol.a 00:07:07.534 CC module/bdev/nvme/vbdev_opal.o 00:07:07.534 SO libspdk_bdev_lvol.so.6.0 00:07:07.791 LIB libspdk_bdev_malloc.a 00:07:07.791 SYMLINK libspdk_bdev_lvol.so 00:07:07.791 LIB libspdk_bdev_null.a 00:07:07.791 SO libspdk_bdev_malloc.so.6.0 00:07:07.791 CC module/bdev/raid/bdev_raid.o 00:07:07.791 SO libspdk_bdev_null.so.6.0 00:07:07.791 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:07.791 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:07.791 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:07.791 SYMLINK libspdk_bdev_malloc.so 00:07:07.791 SYMLINK libspdk_bdev_null.so 00:07:07.791 CC module/bdev/raid/bdev_raid_rpc.o 00:07:08.049 CC module/bdev/split/vbdev_split.o 00:07:08.049 CC module/bdev/raid/bdev_raid_sb.o 00:07:08.049 LIB libspdk_bdev_passthru.a 00:07:08.049 SO libspdk_bdev_passthru.so.6.0 00:07:08.049 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:08.049 CC module/bdev/raid/raid0.o 
00:07:08.049 CC module/bdev/raid/raid1.o 00:07:08.049 CC module/bdev/raid/concat.o 00:07:08.049 SYMLINK libspdk_bdev_passthru.so 00:07:08.307 CC module/bdev/split/vbdev_split_rpc.o 00:07:08.307 CC module/bdev/aio/bdev_aio.o 00:07:08.307 CC module/bdev/ftl/bdev_ftl.o 00:07:08.307 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:08.307 LIB libspdk_bdev_split.a 00:07:08.307 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:08.565 SO libspdk_bdev_split.so.6.0 00:07:08.565 CC module/bdev/iscsi/bdev_iscsi.o 00:07:08.565 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:08.565 SYMLINK libspdk_bdev_split.so 00:07:08.565 CC module/bdev/aio/bdev_aio_rpc.o 00:07:08.565 LIB libspdk_bdev_zone_block.a 00:07:08.565 SO libspdk_bdev_zone_block.so.6.0 00:07:08.822 LIB libspdk_bdev_ftl.a 00:07:08.822 SYMLINK libspdk_bdev_zone_block.so 00:07:08.822 SO libspdk_bdev_ftl.so.6.0 00:07:08.822 LIB libspdk_bdev_aio.a 00:07:08.822 SO libspdk_bdev_aio.so.6.0 00:07:08.822 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:08.822 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:08.822 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:08.822 SYMLINK libspdk_bdev_ftl.so 00:07:08.822 LIB libspdk_bdev_iscsi.a 00:07:08.822 SYMLINK libspdk_bdev_aio.so 00:07:08.822 SO libspdk_bdev_iscsi.so.6.0 00:07:08.822 LIB libspdk_bdev_raid.a 00:07:09.081 SYMLINK libspdk_bdev_iscsi.so 00:07:09.081 SO libspdk_bdev_raid.so.6.0 00:07:09.081 SYMLINK libspdk_bdev_raid.so 00:07:09.339 LIB libspdk_bdev_virtio.a 00:07:09.339 SO libspdk_bdev_virtio.so.6.0 00:07:09.597 SYMLINK libspdk_bdev_virtio.so 00:07:10.531 LIB libspdk_bdev_nvme.a 00:07:10.790 SO libspdk_bdev_nvme.so.7.1 00:07:10.790 SYMLINK libspdk_bdev_nvme.so 00:07:11.356 CC module/event/subsystems/iobuf/iobuf.o 00:07:11.356 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:11.356 CC module/event/subsystems/fsdev/fsdev.o 00:07:11.356 CC module/event/subsystems/sock/sock.o 00:07:11.356 CC module/event/subsystems/keyring/keyring.o 00:07:11.356 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:11.356 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:11.356 CC module/event/subsystems/scheduler/scheduler.o 00:07:11.356 CC module/event/subsystems/vmd/vmd.o 00:07:11.356 LIB libspdk_event_keyring.a 00:07:11.356 LIB libspdk_event_sock.a 00:07:11.356 SO libspdk_event_keyring.so.1.0 00:07:11.356 LIB libspdk_event_fsdev.a 00:07:11.615 SO libspdk_event_sock.so.5.0 00:07:11.615 LIB libspdk_event_iobuf.a 00:07:11.615 LIB libspdk_event_vmd.a 00:07:11.615 LIB libspdk_event_scheduler.a 00:07:11.615 SO libspdk_event_fsdev.so.1.0 00:07:11.615 LIB libspdk_event_vhost_blk.a 00:07:11.615 SO libspdk_event_scheduler.so.4.0 00:07:11.615 SYMLINK libspdk_event_keyring.so 00:07:11.615 SYMLINK libspdk_event_sock.so 00:07:11.615 SO libspdk_event_vmd.so.6.0 00:07:11.615 SO libspdk_event_iobuf.so.3.0 00:07:11.615 SO libspdk_event_vhost_blk.so.3.0 00:07:11.615 SYMLINK libspdk_event_fsdev.so 00:07:11.615 SYMLINK libspdk_event_scheduler.so 00:07:11.615 SYMLINK libspdk_event_iobuf.so 00:07:11.615 SYMLINK libspdk_event_vmd.so 00:07:11.615 SYMLINK libspdk_event_vhost_blk.so 00:07:11.874 CC module/event/subsystems/accel/accel.o 00:07:12.132 LIB libspdk_event_accel.a 00:07:12.132 SO libspdk_event_accel.so.6.0 00:07:12.132 SYMLINK libspdk_event_accel.so 00:07:12.391 CC module/event/subsystems/bdev/bdev.o 00:07:12.651 LIB libspdk_event_bdev.a 00:07:12.651 SO libspdk_event_bdev.so.6.0 00:07:12.651 SYMLINK libspdk_event_bdev.so 00:07:12.910 CC module/event/subsystems/scsi/scsi.o 00:07:12.910 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:12.910 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:07:12.910 CC module/event/subsystems/ublk/ublk.o 00:07:12.910 CC module/event/subsystems/nbd/nbd.o 00:07:13.168 LIB libspdk_event_nbd.a 00:07:13.168 LIB libspdk_event_ublk.a 00:07:13.168 LIB libspdk_event_scsi.a 00:07:13.168 SO libspdk_event_nbd.so.6.0 00:07:13.168 SO libspdk_event_ublk.so.3.0 00:07:13.168 SO libspdk_event_scsi.so.6.0 00:07:13.168 SYMLINK libspdk_event_ublk.so 00:07:13.168 SYMLINK libspdk_event_nbd.so 00:07:13.168 LIB libspdk_event_nvmf.a 00:07:13.426 SYMLINK libspdk_event_scsi.so 00:07:13.426 SO libspdk_event_nvmf.so.6.0 00:07:13.426 SYMLINK libspdk_event_nvmf.so 00:07:13.684 CC module/event/subsystems/iscsi/iscsi.o 00:07:13.684 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:13.684 LIB libspdk_event_iscsi.a 00:07:13.684 LIB libspdk_event_vhost_scsi.a 00:07:13.684 SO libspdk_event_iscsi.so.6.0 00:07:13.942 SO libspdk_event_vhost_scsi.so.3.0 00:07:13.942 SYMLINK libspdk_event_iscsi.so 00:07:13.942 SYMLINK libspdk_event_vhost_scsi.so 00:07:13.942 SO libspdk.so.6.0 00:07:13.942 SYMLINK libspdk.so 00:07:14.200 CC app/trace_record/trace_record.o 00:07:14.458 CXX app/trace/trace.o 00:07:14.458 CC app/iscsi_tgt/iscsi_tgt.o 00:07:14.458 CC app/nvmf_tgt/nvmf_main.o 00:07:14.458 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:14.458 CC examples/util/zipf/zipf.o 00:07:14.458 CC app/spdk_tgt/spdk_tgt.o 00:07:14.458 CC test/thread/poller_perf/poller_perf.o 00:07:14.458 CC examples/ioat/perf/perf.o 00:07:14.716 LINK spdk_trace_record 00:07:14.716 LINK nvmf_tgt 00:07:14.716 LINK iscsi_tgt 00:07:14.716 LINK poller_perf 00:07:14.716 LINK zipf 00:07:14.716 LINK interrupt_tgt 00:07:14.975 LINK spdk_tgt 00:07:14.975 LINK ioat_perf 00:07:15.233 CC examples/ioat/verify/verify.o 00:07:15.233 TEST_HEADER include/spdk/accel.h 00:07:15.233 TEST_HEADER include/spdk/accel_module.h 00:07:15.233 TEST_HEADER include/spdk/assert.h 00:07:15.233 TEST_HEADER include/spdk/barrier.h 00:07:15.233 TEST_HEADER include/spdk/base64.h 00:07:15.233 TEST_HEADER include/spdk/bdev.h 00:07:15.233 TEST_HEADER include/spdk/bdev_module.h 00:07:15.233 TEST_HEADER include/spdk/bdev_zone.h 00:07:15.233 TEST_HEADER include/spdk/bit_array.h 00:07:15.233 TEST_HEADER include/spdk/bit_pool.h 00:07:15.233 TEST_HEADER include/spdk/blob_bdev.h 00:07:15.233 LINK spdk_trace 00:07:15.233 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:15.233 TEST_HEADER include/spdk/blobfs.h 00:07:15.233 TEST_HEADER include/spdk/blob.h 00:07:15.233 TEST_HEADER include/spdk/conf.h 00:07:15.233 TEST_HEADER include/spdk/config.h 00:07:15.233 TEST_HEADER include/spdk/cpuset.h 00:07:15.233 TEST_HEADER include/spdk/crc16.h 00:07:15.233 TEST_HEADER include/spdk/crc32.h 00:07:15.233 TEST_HEADER include/spdk/crc64.h 00:07:15.233 TEST_HEADER include/spdk/dif.h 00:07:15.233 TEST_HEADER include/spdk/dma.h 00:07:15.233 TEST_HEADER include/spdk/endian.h 00:07:15.233 TEST_HEADER include/spdk/env_dpdk.h 00:07:15.233 TEST_HEADER include/spdk/env.h 00:07:15.233 TEST_HEADER include/spdk/event.h 00:07:15.233 TEST_HEADER include/spdk/fd_group.h 00:07:15.233 TEST_HEADER include/spdk/fd.h 00:07:15.233 TEST_HEADER include/spdk/file.h 00:07:15.233 TEST_HEADER include/spdk/fsdev.h 00:07:15.233 TEST_HEADER include/spdk/fsdev_module.h 00:07:15.233 TEST_HEADER include/spdk/ftl.h 00:07:15.233 CC test/dma/test_dma/test_dma.o 00:07:15.233 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:15.233 TEST_HEADER include/spdk/gpt_spec.h 00:07:15.233 TEST_HEADER include/spdk/hexlify.h 00:07:15.233 TEST_HEADER include/spdk/histogram_data.h 
00:07:15.233 TEST_HEADER include/spdk/idxd.h 00:07:15.233 TEST_HEADER include/spdk/idxd_spec.h 00:07:15.233 TEST_HEADER include/spdk/init.h 00:07:15.233 TEST_HEADER include/spdk/ioat.h 00:07:15.233 TEST_HEADER include/spdk/ioat_spec.h 00:07:15.233 TEST_HEADER include/spdk/iscsi_spec.h 00:07:15.233 TEST_HEADER include/spdk/json.h 00:07:15.233 TEST_HEADER include/spdk/jsonrpc.h 00:07:15.233 TEST_HEADER include/spdk/keyring.h 00:07:15.233 TEST_HEADER include/spdk/keyring_module.h 00:07:15.233 TEST_HEADER include/spdk/likely.h 00:07:15.233 TEST_HEADER include/spdk/log.h 00:07:15.233 TEST_HEADER include/spdk/lvol.h 00:07:15.233 TEST_HEADER include/spdk/md5.h 00:07:15.233 TEST_HEADER include/spdk/memory.h 00:07:15.233 TEST_HEADER include/spdk/mmio.h 00:07:15.233 TEST_HEADER include/spdk/nbd.h 00:07:15.233 TEST_HEADER include/spdk/net.h 00:07:15.233 TEST_HEADER include/spdk/notify.h 00:07:15.233 CC test/app/bdev_svc/bdev_svc.o 00:07:15.233 TEST_HEADER include/spdk/nvme.h 00:07:15.233 TEST_HEADER include/spdk/nvme_intel.h 00:07:15.233 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:15.233 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:15.233 TEST_HEADER include/spdk/nvme_spec.h 00:07:15.233 TEST_HEADER include/spdk/nvme_zns.h 00:07:15.233 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:15.233 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:15.233 TEST_HEADER include/spdk/nvmf.h 00:07:15.233 TEST_HEADER include/spdk/nvmf_spec.h 00:07:15.491 TEST_HEADER include/spdk/nvmf_transport.h 00:07:15.491 TEST_HEADER include/spdk/opal.h 00:07:15.491 CC examples/thread/thread/thread_ex.o 00:07:15.491 TEST_HEADER include/spdk/opal_spec.h 00:07:15.491 TEST_HEADER include/spdk/pci_ids.h 00:07:15.491 TEST_HEADER include/spdk/pipe.h 00:07:15.491 TEST_HEADER include/spdk/queue.h 00:07:15.492 TEST_HEADER include/spdk/reduce.h 00:07:15.492 TEST_HEADER include/spdk/rpc.h 00:07:15.492 TEST_HEADER include/spdk/scheduler.h 00:07:15.492 TEST_HEADER include/spdk/scsi.h 00:07:15.492 TEST_HEADER include/spdk/scsi_spec.h 00:07:15.492 TEST_HEADER include/spdk/sock.h 00:07:15.492 TEST_HEADER include/spdk/stdinc.h 00:07:15.492 TEST_HEADER include/spdk/string.h 00:07:15.492 TEST_HEADER include/spdk/thread.h 00:07:15.492 TEST_HEADER include/spdk/trace.h 00:07:15.492 TEST_HEADER include/spdk/trace_parser.h 00:07:15.492 TEST_HEADER include/spdk/tree.h 00:07:15.492 TEST_HEADER include/spdk/ublk.h 00:07:15.492 TEST_HEADER include/spdk/util.h 00:07:15.492 TEST_HEADER include/spdk/uuid.h 00:07:15.492 LINK verify 00:07:15.492 TEST_HEADER include/spdk/version.h 00:07:15.492 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:15.492 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:15.492 TEST_HEADER include/spdk/vhost.h 00:07:15.492 TEST_HEADER include/spdk/vmd.h 00:07:15.492 TEST_HEADER include/spdk/xor.h 00:07:15.492 TEST_HEADER include/spdk/zipf.h 00:07:15.492 CXX test/cpp_headers/accel.o 00:07:15.492 CC test/env/mem_callbacks/mem_callbacks.o 00:07:15.492 CC examples/sock/hello_world/hello_sock.o 00:07:15.492 CC test/event/event_perf/event_perf.o 00:07:15.492 LINK bdev_svc 00:07:15.492 CC app/spdk_lspci/spdk_lspci.o 00:07:15.749 CXX test/cpp_headers/accel_module.o 00:07:15.749 LINK thread 00:07:15.749 LINK event_perf 00:07:15.749 LINK hello_sock 00:07:15.749 CC test/event/reactor/reactor.o 00:07:15.749 LINK spdk_lspci 00:07:15.749 LINK test_dma 00:07:15.749 CXX test/cpp_headers/assert.o 00:07:16.006 CXX test/cpp_headers/barrier.o 00:07:16.006 LINK reactor 00:07:16.006 CC test/app/histogram_perf/histogram_perf.o 00:07:16.263 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:16.263 CXX test/cpp_headers/base64.o 00:07:16.263 CXX test/cpp_headers/bdev.o 00:07:16.263 LINK mem_callbacks 00:07:16.263 CC app/spdk_nvme_perf/perf.o 00:07:16.263 CC app/spdk_nvme_identify/identify.o 00:07:16.521 CC test/event/reactor_perf/reactor_perf.o 00:07:16.521 LINK histogram_perf 00:07:16.521 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:16.521 CXX test/cpp_headers/bdev_module.o 00:07:16.521 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:16.521 CC test/env/vtophys/vtophys.o 00:07:16.779 LINK nvme_fuzz 00:07:16.779 LINK reactor_perf 00:07:16.779 CC test/rpc_client/rpc_client_test.o 00:07:16.779 LINK env_dpdk_post_init 00:07:16.779 LINK vtophys 00:07:17.037 CXX test/cpp_headers/bdev_zone.o 00:07:17.037 LINK rpc_client_test 00:07:17.037 CC test/event/app_repeat/app_repeat.o 00:07:17.037 CC examples/vmd/lsvmd/lsvmd.o 00:07:17.037 CC test/env/memory/memory_ut.o 00:07:17.295 CC test/event/scheduler/scheduler.o 00:07:17.295 CXX test/cpp_headers/bit_array.o 00:07:17.295 LINK lsvmd 00:07:17.295 LINK app_repeat 00:07:17.295 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:17.553 CXX test/cpp_headers/bit_pool.o 00:07:17.553 CXX test/cpp_headers/blob_bdev.o 00:07:17.553 LINK scheduler 00:07:17.553 CC examples/vmd/led/led.o 00:07:17.553 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:17.553 LINK spdk_nvme_identify 00:07:17.811 LINK spdk_nvme_perf 00:07:17.811 CXX test/cpp_headers/blobfs_bdev.o 00:07:17.811 LINK led 00:07:17.811 CC app/spdk_nvme_discover/discovery_aer.o 00:07:18.069 CC app/spdk_top/spdk_top.o 00:07:18.069 CC test/app/jsoncat/jsoncat.o 00:07:18.069 CXX test/cpp_headers/blobfs.o 00:07:18.069 CC test/app/stub/stub.o 00:07:18.069 LINK spdk_nvme_discover 00:07:18.360 LINK vhost_fuzz 00:07:18.360 LINK jsoncat 00:07:18.360 CC app/vhost/vhost.o 00:07:18.360 CXX test/cpp_headers/blob.o 00:07:18.360 LINK stub 00:07:18.360 CXX test/cpp_headers/conf.o 00:07:18.619 CXX test/cpp_headers/config.o 00:07:18.619 LINK iscsi_fuzz 00:07:18.619 LINK memory_ut 00:07:18.619 LINK vhost 00:07:18.619 CXX test/cpp_headers/cpuset.o 00:07:18.619 CC app/spdk_dd/spdk_dd.o 00:07:18.876 CC app/fio/nvme/fio_plugin.o 00:07:18.876 CC test/env/pci/pci_ut.o 00:07:18.876 CXX test/cpp_headers/crc16.o 00:07:18.876 CC test/accel/dif/dif.o 00:07:19.134 CC examples/idxd/perf/perf.o 00:07:19.134 CXX test/cpp_headers/crc32.o 00:07:19.134 LINK spdk_dd 00:07:19.134 LINK spdk_top 00:07:19.394 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:19.394 CC examples/accel/perf/accel_perf.o 00:07:19.394 LINK pci_ut 00:07:19.394 CXX test/cpp_headers/crc64.o 00:07:19.651 LINK spdk_nvme 00:07:19.651 LINK idxd_perf 00:07:19.651 LINK hello_fsdev 00:07:19.908 CXX test/cpp_headers/dif.o 00:07:19.908 LINK dif 00:07:19.908 CXX test/cpp_headers/dma.o 00:07:19.908 CC test/blobfs/mkfs/mkfs.o 00:07:20.165 CC examples/blob/hello_world/hello_blob.o 00:07:20.165 LINK accel_perf 00:07:20.165 CXX test/cpp_headers/endian.o 00:07:20.165 CC app/fio/bdev/fio_plugin.o 00:07:20.165 CC examples/nvme/hello_world/hello_world.o 00:07:20.165 CC examples/nvme/reconnect/reconnect.o 00:07:20.423 CXX test/cpp_headers/env_dpdk.o 00:07:20.423 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:20.423 LINK mkfs 00:07:20.423 CC examples/nvme/arbitration/arbitration.o 00:07:20.423 LINK hello_blob 00:07:20.681 CXX test/cpp_headers/env.o 00:07:20.681 LINK hello_world 00:07:20.681 CC examples/bdev/hello_world/hello_bdev.o 00:07:20.681 LINK spdk_bdev 00:07:20.939 LINK reconnect 00:07:20.939 CXX test/cpp_headers/event.o 
00:07:20.939 CC examples/blob/cli/blobcli.o 00:07:20.939 CC examples/bdev/bdevperf/bdevperf.o 00:07:20.939 CC examples/nvme/hotplug/hotplug.o 00:07:20.939 LINK arbitration 00:07:21.198 LINK hello_bdev 00:07:21.198 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:21.198 CXX test/cpp_headers/fd_group.o 00:07:21.198 LINK nvme_manage 00:07:21.198 CXX test/cpp_headers/fd.o 00:07:21.456 LINK hotplug 00:07:21.456 CC examples/nvme/abort/abort.o 00:07:21.715 LINK cmb_copy 00:07:21.715 CXX test/cpp_headers/file.o 00:07:21.715 CXX test/cpp_headers/fsdev.o 00:07:21.715 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:21.715 LINK blobcli 00:07:21.973 CXX test/cpp_headers/fsdev_module.o 00:07:21.973 CXX test/cpp_headers/ftl.o 00:07:21.973 CXX test/cpp_headers/fuse_dispatcher.o 00:07:21.973 LINK abort 00:07:22.231 CC test/lvol/esnap/esnap.o 00:07:22.231 LINK pmr_persistence 00:07:22.231 CXX test/cpp_headers/gpt_spec.o 00:07:22.231 LINK bdevperf 00:07:22.489 CC test/nvme/aer/aer.o 00:07:22.489 CXX test/cpp_headers/hexlify.o 00:07:22.489 CC test/bdev/bdevio/bdevio.o 00:07:22.489 CC test/nvme/reset/reset.o 00:07:22.489 CXX test/cpp_headers/histogram_data.o 00:07:22.489 CC test/nvme/sgl/sgl.o 00:07:22.489 CC test/nvme/e2edp/nvme_dp.o 00:07:22.748 CC test/nvme/overhead/overhead.o 00:07:22.748 CXX test/cpp_headers/idxd.o 00:07:22.748 LINK sgl 00:07:22.748 LINK nvme_dp 00:07:23.006 LINK aer 00:07:23.006 LINK bdevio 00:07:23.006 LINK reset 00:07:23.006 CXX test/cpp_headers/idxd_spec.o 00:07:23.006 CC examples/nvmf/nvmf/nvmf.o 00:07:23.280 LINK overhead 00:07:23.280 CC test/nvme/err_injection/err_injection.o 00:07:23.280 CXX test/cpp_headers/init.o 00:07:23.280 CC test/nvme/startup/startup.o 00:07:23.280 CC test/nvme/reserve/reserve.o 00:07:23.280 CC test/nvme/simple_copy/simple_copy.o 00:07:23.538 CXX test/cpp_headers/ioat.o 00:07:23.538 CC test/nvme/connect_stress/connect_stress.o 00:07:23.538 LINK startup 00:07:23.538 LINK err_injection 00:07:23.538 CXX test/cpp_headers/ioat_spec.o 00:07:23.795 LINK nvmf 00:07:23.795 CC test/nvme/boot_partition/boot_partition.o 00:07:23.795 LINK simple_copy 00:07:23.795 LINK reserve 00:07:23.795 CXX test/cpp_headers/iscsi_spec.o 00:07:23.795 LINK connect_stress 00:07:24.053 LINK boot_partition 00:07:24.053 CC test/nvme/compliance/nvme_compliance.o 00:07:24.053 CXX test/cpp_headers/json.o 00:07:24.053 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:24.053 CC test/nvme/fused_ordering/fused_ordering.o 00:07:24.054 CXX test/cpp_headers/jsonrpc.o 00:07:24.312 CXX test/cpp_headers/keyring.o 00:07:24.312 CXX test/cpp_headers/keyring_module.o 00:07:24.312 CXX test/cpp_headers/likely.o 00:07:24.312 CC test/nvme/cuse/cuse.o 00:07:24.312 CC test/nvme/fdp/fdp.o 00:07:24.312 LINK nvme_compliance 00:07:24.312 LINK fused_ordering 00:07:24.312 CXX test/cpp_headers/log.o 00:07:24.570 CXX test/cpp_headers/lvol.o 00:07:24.570 LINK doorbell_aers 00:07:24.570 CXX test/cpp_headers/md5.o 00:07:24.570 CXX test/cpp_headers/memory.o 00:07:24.570 CXX test/cpp_headers/mmio.o 00:07:24.570 CXX test/cpp_headers/nbd.o 00:07:24.570 CXX test/cpp_headers/net.o 00:07:24.570 CXX test/cpp_headers/notify.o 00:07:24.570 LINK fdp 00:07:24.828 CXX test/cpp_headers/nvme.o 00:07:24.828 CXX test/cpp_headers/nvme_intel.o 00:07:24.828 CXX test/cpp_headers/nvme_ocssd.o 00:07:24.828 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:24.828 CXX test/cpp_headers/nvme_spec.o 00:07:24.828 CXX test/cpp_headers/nvme_zns.o 00:07:24.828 CXX test/cpp_headers/nvmf_cmd.o 00:07:24.828 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:25.085 CXX 
test/cpp_headers/nvmf.o 00:07:25.085 CXX test/cpp_headers/nvmf_spec.o 00:07:25.085 CXX test/cpp_headers/nvmf_transport.o 00:07:25.085 CXX test/cpp_headers/opal.o 00:07:25.085 CXX test/cpp_headers/opal_spec.o 00:07:25.343 CXX test/cpp_headers/pci_ids.o 00:07:25.343 CXX test/cpp_headers/pipe.o 00:07:25.343 CXX test/cpp_headers/queue.o 00:07:25.343 CXX test/cpp_headers/reduce.o 00:07:25.343 CXX test/cpp_headers/rpc.o 00:07:25.343 CXX test/cpp_headers/scheduler.o 00:07:25.343 CXX test/cpp_headers/scsi.o 00:07:25.343 CXX test/cpp_headers/scsi_spec.o 00:07:25.601 CXX test/cpp_headers/sock.o 00:07:25.601 CXX test/cpp_headers/stdinc.o 00:07:25.601 CXX test/cpp_headers/string.o 00:07:25.601 CXX test/cpp_headers/thread.o 00:07:25.601 CXX test/cpp_headers/trace.o 00:07:25.601 CXX test/cpp_headers/trace_parser.o 00:07:25.601 CXX test/cpp_headers/tree.o 00:07:25.601 CXX test/cpp_headers/ublk.o 00:07:25.601 CXX test/cpp_headers/util.o 00:07:25.601 CXX test/cpp_headers/uuid.o 00:07:25.859 CXX test/cpp_headers/version.o 00:07:25.859 CXX test/cpp_headers/vfio_user_pci.o 00:07:25.859 CXX test/cpp_headers/vfio_user_spec.o 00:07:25.859 CXX test/cpp_headers/vhost.o 00:07:25.859 CXX test/cpp_headers/vmd.o 00:07:25.859 CXX test/cpp_headers/xor.o 00:07:25.859 CXX test/cpp_headers/zipf.o 00:07:26.426 LINK cuse 00:07:29.786 LINK esnap 00:07:29.786 00:07:29.786 real 1m49.429s 00:07:29.786 user 9m55.017s 00:07:29.786 sys 2m4.772s 00:07:29.786 13:01:18 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:29.786 13:01:18 make -- common/autotest_common.sh@10 -- $ set +x 00:07:29.786 ************************************ 00:07:29.786 END TEST make 00:07:29.786 ************************************ 00:07:29.786 13:01:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:29.786 13:01:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:29.786 13:01:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:29.786 13:01:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.786 13:01:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:29.786 13:01:18 -- pm/common@44 -- $ pid=5297 00:07:29.786 13:01:18 -- pm/common@50 -- $ kill -TERM 5297 00:07:29.786 13:01:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.786 13:01:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:29.786 13:01:18 -- pm/common@44 -- $ pid=5299 00:07:29.786 13:01:18 -- pm/common@50 -- $ kill -TERM 5299 00:07:29.786 13:01:18 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:29.786 13:01:18 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:30.044 13:01:18 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.044 13:01:18 -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.044 13:01:18 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.044 13:01:18 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.044 13:01:18 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.044 13:01:18 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.044 13:01:18 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.044 13:01:18 -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.044 13:01:18 -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.044 13:01:18 -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.044 13:01:18 -- scripts/common.sh@337 
-- # read -ra ver2 00:07:30.044 13:01:18 -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.044 13:01:18 -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.044 13:01:18 -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.044 13:01:18 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.044 13:01:18 -- scripts/common.sh@344 -- # case "$op" in 00:07:30.044 13:01:18 -- scripts/common.sh@345 -- # : 1 00:07:30.044 13:01:18 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.044 13:01:18 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.044 13:01:18 -- scripts/common.sh@365 -- # decimal 1 00:07:30.044 13:01:18 -- scripts/common.sh@353 -- # local d=1 00:07:30.044 13:01:18 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.044 13:01:18 -- scripts/common.sh@355 -- # echo 1 00:07:30.044 13:01:18 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.044 13:01:18 -- scripts/common.sh@366 -- # decimal 2 00:07:30.044 13:01:18 -- scripts/common.sh@353 -- # local d=2 00:07:30.044 13:01:18 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.044 13:01:18 -- scripts/common.sh@355 -- # echo 2 00:07:30.044 13:01:18 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.044 13:01:18 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.044 13:01:18 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.044 13:01:18 -- scripts/common.sh@368 -- # return 0 00:07:30.044 13:01:18 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.044 13:01:18 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.044 --rc genhtml_branch_coverage=1 00:07:30.044 --rc genhtml_function_coverage=1 00:07:30.044 --rc genhtml_legend=1 00:07:30.044 --rc geninfo_all_blocks=1 00:07:30.044 --rc geninfo_unexecuted_blocks=1 00:07:30.044 00:07:30.044 ' 00:07:30.044 13:01:18 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.044 --rc genhtml_branch_coverage=1 00:07:30.044 --rc genhtml_function_coverage=1 00:07:30.044 --rc genhtml_legend=1 00:07:30.044 --rc geninfo_all_blocks=1 00:07:30.044 --rc geninfo_unexecuted_blocks=1 00:07:30.044 00:07:30.044 ' 00:07:30.044 13:01:18 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.044 --rc genhtml_branch_coverage=1 00:07:30.044 --rc genhtml_function_coverage=1 00:07:30.044 --rc genhtml_legend=1 00:07:30.044 --rc geninfo_all_blocks=1 00:07:30.044 --rc geninfo_unexecuted_blocks=1 00:07:30.044 00:07:30.044 ' 00:07:30.044 13:01:18 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.044 --rc genhtml_branch_coverage=1 00:07:30.044 --rc genhtml_function_coverage=1 00:07:30.044 --rc genhtml_legend=1 00:07:30.044 --rc geninfo_all_blocks=1 00:07:30.044 --rc geninfo_unexecuted_blocks=1 00:07:30.044 00:07:30.044 ' 00:07:30.044 13:01:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.044 13:01:18 -- nvmf/common.sh@7 -- # uname -s 00:07:30.044 13:01:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.044 13:01:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.044 13:01:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.044 13:01:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.044 13:01:18 -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.044 13:01:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.044 13:01:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.044 13:01:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.044 13:01:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.044 13:01:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.044 13:01:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:07:30.044 13:01:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:07:30.044 13:01:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.044 13:01:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.044 13:01:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.044 13:01:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.044 13:01:18 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.044 13:01:18 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.044 13:01:18 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.044 13:01:18 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.044 13:01:18 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.044 13:01:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.044 13:01:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.044 13:01:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.044 13:01:18 -- paths/export.sh@5 -- # export PATH 00:07:30.044 13:01:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.044 13:01:18 -- nvmf/common.sh@51 -- # : 0 00:07:30.044 13:01:18 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.044 13:01:18 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.044 13:01:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.044 13:01:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.044 13:01:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.044 13:01:18 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.044 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.044 13:01:18 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.044 13:01:18 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.044 13:01:18 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.044 13:01:18 -- 
spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:30.044 13:01:18 -- spdk/autotest.sh@32 -- # uname -s 00:07:30.044 13:01:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:30.044 13:01:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:30.044 13:01:18 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:30.045 13:01:18 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:30.045 13:01:18 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:30.045 13:01:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:30.302 13:01:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:30.302 13:01:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:30.302 13:01:18 -- spdk/autotest.sh@48 -- # udevadm_pid=56328 00:07:30.302 13:01:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:30.302 13:01:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:30.302 13:01:18 -- pm/common@17 -- # local monitor 00:07:30.302 13:01:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:30.302 13:01:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:30.302 13:01:18 -- pm/common@21 -- # date +%s 00:07:30.302 13:01:18 -- pm/common@21 -- # date +%s 00:07:30.302 13:01:18 -- pm/common@25 -- # sleep 1 00:07:30.302 13:01:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490078 00:07:30.302 13:01:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490078 00:07:30.302 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490078_collect-vmstat.pm.log 00:07:30.302 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490078_collect-cpu-load.pm.log 00:07:31.234 13:01:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:31.234 13:01:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:31.234 13:01:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.234 13:01:19 -- common/autotest_common.sh@10 -- # set +x 00:07:31.234 13:01:19 -- spdk/autotest.sh@59 -- # create_test_list 00:07:31.234 13:01:19 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:31.234 13:01:19 -- common/autotest_common.sh@10 -- # set +x 00:07:31.234 13:01:19 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:31.234 13:01:19 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:31.234 13:01:19 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:31.234 13:01:19 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:31.234 13:01:19 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:31.234 13:01:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:31.234 13:01:19 -- common/autotest_common.sh@1457 -- # uname 00:07:31.234 13:01:19 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:31.234 13:01:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:31.234 13:01:19 -- common/autotest_common.sh@1477 -- # uname 00:07:31.234 13:01:19 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:31.234 13:01:19 -- spdk/autotest.sh@68 
-- # [[ y == y ]] 00:07:31.234 13:01:19 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:31.491 lcov: LCOV version 1.15 00:07:31.491 13:01:19 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:49.566 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:49.566 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:07.642 13:01:56 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:07.642 13:01:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.642 13:01:56 -- common/autotest_common.sh@10 -- # set +x 00:08:07.642 13:01:56 -- spdk/autotest.sh@78 -- # rm -f 00:08:07.642 13:01:56 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:08.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:08.209 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:08.209 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:08.209 13:01:56 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:08.209 13:01:56 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:08.209 13:01:56 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:08.209 13:01:56 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:08.209 13:01:56 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:08.209 13:01:56 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:08.209 13:01:56 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:08.209 13:01:56 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:08:08.209 13:01:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:08.209 13:01:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:08.209 13:01:56 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:08.209 13:01:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:08.209 13:01:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:08.209 13:01:56 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:08.209 13:01:56 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:08:08.209 13:01:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:08.209 13:01:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:08:08.209 13:01:56 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:08.209 13:01:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:08.209 13:01:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:08.209 13:01:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:08.209 13:01:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:08:08.209 13:01:56 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:08:08.209 13:01:56 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1n2/queue/zoned ]] 00:08:08.209 13:01:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:08.209 13:01:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:08.209 13:01:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:08:08.209 13:01:56 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:08:08.209 13:01:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:08.209 13:01:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:08.209 13:01:56 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:08.209 13:01:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:08.209 13:01:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:08.209 13:01:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:08.209 13:01:56 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:08.209 13:01:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:08.209 No valid GPT data, bailing 00:08:08.467 13:01:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:08.467 13:01:56 -- scripts/common.sh@394 -- # pt= 00:08:08.467 13:01:56 -- scripts/common.sh@395 -- # return 1 00:08:08.467 13:01:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:08.467 1+0 records in 00:08:08.467 1+0 records out 00:08:08.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040076 s, 262 MB/s 00:08:08.467 13:01:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:08.467 13:01:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:08.467 13:01:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:08.467 13:01:56 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:08.467 13:01:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:08.467 No valid GPT data, bailing 00:08:08.467 13:01:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:08.467 13:01:56 -- scripts/common.sh@394 -- # pt= 00:08:08.467 13:01:56 -- scripts/common.sh@395 -- # return 1 00:08:08.467 13:01:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:08.467 1+0 records in 00:08:08.467 1+0 records out 00:08:08.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461357 s, 227 MB/s 00:08:08.467 13:01:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:08.467 13:01:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:08.467 13:01:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:08.467 13:01:56 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:08.467 13:01:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:08.467 No valid GPT data, bailing 00:08:08.467 13:01:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:08.467 13:01:57 -- scripts/common.sh@394 -- # pt= 00:08:08.467 13:01:57 -- scripts/common.sh@395 -- # return 1 00:08:08.467 13:01:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:08.467 1+0 records in 00:08:08.467 1+0 records out 00:08:08.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434441 s, 241 MB/s 00:08:08.467 13:01:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:08.467 13:01:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:08.467 13:01:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:08.467 13:01:57 -- scripts/common.sh@381 -- # local 
block=/dev/nvme1n3 pt 00:08:08.467 13:01:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:08.726 No valid GPT data, bailing 00:08:08.726 13:01:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:08.726 13:01:57 -- scripts/common.sh@394 -- # pt= 00:08:08.726 13:01:57 -- scripts/common.sh@395 -- # return 1 00:08:08.726 13:01:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:08.726 1+0 records in 00:08:08.726 1+0 records out 00:08:08.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415135 s, 253 MB/s 00:08:08.726 13:01:57 -- spdk/autotest.sh@105 -- # sync 00:08:08.726 13:01:57 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:08.726 13:01:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:08.726 13:01:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:10.629 13:01:59 -- spdk/autotest.sh@111 -- # uname -s 00:08:10.629 13:01:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:10.629 13:01:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:10.629 13:01:59 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:11.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:11.196 Hugepages 00:08:11.196 node hugesize free / total 00:08:11.196 node0 1048576kB 0 / 0 00:08:11.196 node0 2048kB 0 / 0 00:08:11.196 00:08:11.196 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:11.196 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:11.455 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:11.455 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:11.455 13:01:59 -- spdk/autotest.sh@117 -- # uname -s 00:08:11.455 13:01:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:11.455 13:01:59 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:11.455 13:01:59 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:12.023 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.281 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.281 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.281 13:02:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:13.654 13:02:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:13.654 13:02:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:13.654 13:02:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:13.654 13:02:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:13.654 13:02:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:13.654 13:02:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:13.654 13:02:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:13.654 13:02:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:13.654 13:02:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:13.654 13:02:01 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:13.654 13:02:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:13.654 13:02:01 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:13.654 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:13.654 Waiting for block devices as requested 00:08:13.654 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:13.912 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:13.912 13:02:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:13.912 13:02:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:13.912 13:02:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:13.912 13:02:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:13.912 13:02:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:13.912 13:02:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:13.912 13:02:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:13.912 13:02:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:13.912 13:02:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:13.912 13:02:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:13.912 13:02:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:13.912 13:02:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:13.912 13:02:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:13.912 13:02:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:13.912 13:02:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:13.912 13:02:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:13.912 13:02:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:13.912 13:02:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:13.912 13:02:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:13.912 13:02:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:13.912 13:02:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:13.912 13:02:02 -- common/autotest_common.sh@1543 -- # continue 00:08:13.912 13:02:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:13.912 13:02:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:13.912 13:02:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:13.912 13:02:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:13.912 13:02:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:13.912 13:02:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:13.912 13:02:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:13.912 13:02:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:13.912 13:02:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:13.912 13:02:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:13.912 13:02:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:13.912 13:02:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:13.912 13:02:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:13.912 13:02:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:13.912 13:02:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:13.912 13:02:02 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:13.912 13:02:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:13.912 13:02:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:13.912 13:02:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:13.912 13:02:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:13.912 13:02:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:13.912 13:02:02 -- common/autotest_common.sh@1543 -- # continue 00:08:13.912 13:02:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:13.912 13:02:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:13.912 13:02:02 -- common/autotest_common.sh@10 -- # set +x 00:08:14.170 13:02:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:14.170 13:02:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.170 13:02:02 -- common/autotest_common.sh@10 -- # set +x 00:08:14.170 13:02:02 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:14.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:14.736 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:14.994 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:14.994 13:02:03 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:14.994 13:02:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:14.994 13:02:03 -- common/autotest_common.sh@10 -- # set +x 00:08:14.994 13:02:03 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:14.994 13:02:03 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:14.994 13:02:03 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:14.994 13:02:03 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:14.994 13:02:03 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:14.994 13:02:03 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:14.994 13:02:03 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:14.994 13:02:03 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:14.994 13:02:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:14.994 13:02:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:14.994 13:02:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:14.994 13:02:03 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:14.994 13:02:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:14.994 13:02:03 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:14.994 13:02:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:14.994 13:02:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:14.994 13:02:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:14.994 13:02:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:14.994 13:02:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:14.994 13:02:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:14.994 13:02:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:14.994 13:02:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:14.994 13:02:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:14.994 13:02:03 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 
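The id-ctrl probes traced above boil down to two field reads per controller; a condensed sketch of that logic (a hypothetical standalone form, assuming nvme-cli is installed and that this runs inside the per-bdf loop):

    # OACS word from the controller; bit 3 set means Namespace Management is supported
    oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)           # e.g. ' 0x12a'
    oacs_ns_manage=$(( oacs & 0x8 ))                                    # 0x12a & 0x8 = 8, as logged
    # unallocated NVM capacity; ' 0' means every byte is already allocated
    unvmcap=$(nvme id-ctrl /dev/nvme1 | grep unvmcap | cut -d: -f2)
    if (( oacs_ns_manage != 0 && unvmcap == 0 )); then
        continue    # nothing to revert on this controller, matching the trace
    fi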
00:08:14.994 13:02:03 -- common/autotest_common.sh@1572 -- # return 0 00:08:14.994 13:02:03 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:14.994 13:02:03 -- common/autotest_common.sh@1580 -- # return 0 00:08:14.994 13:02:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:14.994 13:02:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:14.994 13:02:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:14.994 13:02:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:14.994 13:02:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:14.994 13:02:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.994 13:02:03 -- common/autotest_common.sh@10 -- # set +x 00:08:14.994 13:02:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:14.994 13:02:03 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:14.994 13:02:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.994 13:02:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.994 13:02:03 -- common/autotest_common.sh@10 -- # set +x 00:08:14.994 ************************************ 00:08:14.994 START TEST env 00:08:14.994 ************************************ 00:08:14.994 13:02:03 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:15.253 * Looking for test storage... 00:08:15.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1711 -- # lcov --version 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:15.253 13:02:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.253 13:02:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.253 13:02:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.253 13:02:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.253 13:02:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.253 13:02:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.253 13:02:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.253 13:02:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.253 13:02:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.253 13:02:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.253 13:02:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.253 13:02:03 env -- scripts/common.sh@344 -- # case "$op" in 00:08:15.253 13:02:03 env -- scripts/common.sh@345 -- # : 1 00:08:15.253 13:02:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.253 13:02:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.253 13:02:03 env -- scripts/common.sh@365 -- # decimal 1 00:08:15.253 13:02:03 env -- scripts/common.sh@353 -- # local d=1 00:08:15.253 13:02:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.253 13:02:03 env -- scripts/common.sh@355 -- # echo 1 00:08:15.253 13:02:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.253 13:02:03 env -- scripts/common.sh@366 -- # decimal 2 00:08:15.253 13:02:03 env -- scripts/common.sh@353 -- # local d=2 00:08:15.253 13:02:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.253 13:02:03 env -- scripts/common.sh@355 -- # echo 2 00:08:15.253 13:02:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.253 13:02:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.253 13:02:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.253 13:02:03 env -- scripts/common.sh@368 -- # return 0 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:15.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.253 --rc genhtml_branch_coverage=1 00:08:15.253 --rc genhtml_function_coverage=1 00:08:15.253 --rc genhtml_legend=1 00:08:15.253 --rc geninfo_all_blocks=1 00:08:15.253 --rc geninfo_unexecuted_blocks=1 00:08:15.253 00:08:15.253 ' 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:15.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.253 --rc genhtml_branch_coverage=1 00:08:15.253 --rc genhtml_function_coverage=1 00:08:15.253 --rc genhtml_legend=1 00:08:15.253 --rc geninfo_all_blocks=1 00:08:15.253 --rc geninfo_unexecuted_blocks=1 00:08:15.253 00:08:15.253 ' 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:15.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.253 --rc genhtml_branch_coverage=1 00:08:15.253 --rc genhtml_function_coverage=1 00:08:15.253 --rc genhtml_legend=1 00:08:15.253 --rc geninfo_all_blocks=1 00:08:15.253 --rc geninfo_unexecuted_blocks=1 00:08:15.253 00:08:15.253 ' 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:15.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.253 --rc genhtml_branch_coverage=1 00:08:15.253 --rc genhtml_function_coverage=1 00:08:15.253 --rc genhtml_legend=1 00:08:15.253 --rc geninfo_all_blocks=1 00:08:15.253 --rc geninfo_unexecuted_blocks=1 00:08:15.253 00:08:15.253 ' 00:08:15.253 13:02:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.253 13:02:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.253 13:02:03 env -- common/autotest_common.sh@10 -- # set +x 00:08:15.253 ************************************ 00:08:15.253 START TEST env_memory 00:08:15.253 ************************************ 00:08:15.253 13:02:03 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:15.253 00:08:15.253 00:08:15.253 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.253 http://cunit.sourceforge.net/ 00:08:15.253 00:08:15.253 00:08:15.253 Suite: memory 00:08:15.253 Test: alloc and free memory map ...[2024-12-06 13:02:03.877992] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:15.540 passed 00:08:15.540 Test: mem map translation ...[2024-12-06 13:02:03.908247] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:15.540 [2024-12-06 13:02:03.908690] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:15.540 [2024-12-06 13:02:03.908959] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:15.540 [2024-12-06 13:02:03.909164] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:15.540 passed 00:08:15.540 Test: mem map registration ...[2024-12-06 13:02:03.967212] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:15.540 [2024-12-06 13:02:03.967703] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:15.540 passed 00:08:15.540 Test: mem map adjacent registrations ...passed 00:08:15.540 00:08:15.540 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.540 suites 1 1 n/a 0 0 00:08:15.540 tests 4 4 4 0 0 00:08:15.540 asserts 152 152 152 0 n/a 00:08:15.540 00:08:15.540 Elapsed time = 0.195 seconds 00:08:15.540 00:08:15.540 real 0m0.220s 00:08:15.540 user 0m0.191s 00:08:15.540 sys 0m0.019s 00:08:15.540 ************************************ 00:08:15.540 END TEST env_memory 00:08:15.540 ************************************ 00:08:15.540 13:02:04 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.540 13:02:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:15.540 13:02:04 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:15.540 13:02:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.540 13:02:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.540 13:02:04 env -- common/autotest_common.sh@10 -- # set +x 00:08:15.540 ************************************ 00:08:15.540 START TEST env_vtophys 00:08:15.540 ************************************ 00:08:15.540 13:02:04 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:15.540 EAL: lib.eal log level changed from notice to debug 00:08:15.540 EAL: Detected lcore 0 as core 0 on socket 0 00:08:15.540 EAL: Detected lcore 1 as core 0 on socket 0 00:08:15.540 EAL: Detected lcore 2 as core 0 on socket 0 00:08:15.540 EAL: Detected lcore 3 as core 0 on socket 0 00:08:15.540 EAL: Detected lcore 4 as core 0 on socket 0 00:08:15.540 EAL: Detected lcore 5 as core 0 on socket 0 00:08:15.540 EAL: Detected lcore 6 as core 0 on socket 0 00:08:15.540 EAL: Detected lcore 7 as core 0 on socket 0 00:08:15.540 EAL: Detected lcore 8 as core 0 on socket 0 00:08:15.540 EAL: Detected lcore 9 as core 0 on socket 0 00:08:15.540 EAL: Maximum logical cores by configuration: 128 00:08:15.540 EAL: Detected CPU lcores: 10 00:08:15.540 EAL: Detected NUMA nodes: 1 00:08:15.540 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:15.540 EAL: Detected shared linkage of DPDK 00:08:15.540 EAL: No 
shared files mode enabled, IPC will be disabled 00:08:15.540 EAL: Selected IOVA mode 'PA' 00:08:15.540 EAL: Probing VFIO support... 00:08:15.540 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:15.540 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:15.540 EAL: Ask a virtual area of 0x2e000 bytes 00:08:15.540 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:15.540 EAL: Setting up physically contiguous memory... 00:08:15.540 EAL: Setting maximum number of open files to 524288 00:08:15.540 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:15.540 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:15.540 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.540 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:15.540 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:15.540 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.540 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:15.540 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:15.540 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.540 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:15.540 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:15.540 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.540 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:15.540 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:15.540 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.540 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:15.540 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:15.540 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.540 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:15.540 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:15.540 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.540 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:15.540 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:15.540 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.540 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:15.540 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:15.540 EAL: Hugepages will be freed exactly as allocated. 00:08:15.540 EAL: No shared files mode enabled, IPC is disabled 00:08:15.541 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: TSC frequency is ~2200000 KHz 00:08:15.836 EAL: Main lcore 0 is ready (tid=7fc8adb0fa00;cpuset=[0]) 00:08:15.836 EAL: Trying to obtain current memory policy. 00:08:15.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.836 EAL: Restoring previous memory policy: 0 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was expanded by 2MB 00:08:15.836 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:15.836 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:15.836 EAL: Mem event callback 'spdk:(nil)' registered 00:08:15.836 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:08:15.836 00:08:15.836 00:08:15.836 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.836 http://cunit.sourceforge.net/ 00:08:15.836 00:08:15.836 00:08:15.836 Suite: components_suite 00:08:15.836 Test: vtophys_malloc_test ...passed 00:08:15.836 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:15.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.836 EAL: Restoring previous memory policy: 4 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was expanded by 4MB 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was shrunk by 4MB 00:08:15.836 EAL: Trying to obtain current memory policy. 00:08:15.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.836 EAL: Restoring previous memory policy: 4 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was expanded by 6MB 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was shrunk by 6MB 00:08:15.836 EAL: Trying to obtain current memory policy. 00:08:15.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.836 EAL: Restoring previous memory policy: 4 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was expanded by 10MB 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was shrunk by 10MB 00:08:15.836 EAL: Trying to obtain current memory policy. 00:08:15.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.836 EAL: Restoring previous memory policy: 4 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was expanded by 18MB 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was shrunk by 18MB 00:08:15.836 EAL: Trying to obtain current memory policy. 00:08:15.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.836 EAL: Restoring previous memory policy: 4 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was expanded by 34MB 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was shrunk by 34MB 00:08:15.836 EAL: Trying to obtain current memory policy. 
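Each of the four 0x400000000-byte VA reservations logged a little further up is simply n_segs x hugepage_sz for one memseg list; a one-liner confirms the arithmetic:

    # 8192 segments of 2 MiB hugepages per memseg list
    printf '0x%x\n' $(( 8192 * 2097152 ))    # 0x400000000, i.e. 16 GiB of VA per list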
00:08:15.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.836 EAL: Restoring previous memory policy: 4 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was expanded by 66MB 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.836 EAL: No shared files mode enabled, IPC is disabled 00:08:15.836 EAL: Heap on socket 0 was shrunk by 66MB 00:08:15.836 EAL: Trying to obtain current memory policy. 00:08:15.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.836 EAL: Restoring previous memory policy: 4 00:08:15.836 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.836 EAL: request: mp_malloc_sync 00:08:15.837 EAL: No shared files mode enabled, IPC is disabled 00:08:15.837 EAL: Heap on socket 0 was expanded by 130MB 00:08:15.837 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.837 EAL: request: mp_malloc_sync 00:08:15.837 EAL: No shared files mode enabled, IPC is disabled 00:08:15.837 EAL: Heap on socket 0 was shrunk by 130MB 00:08:15.837 EAL: Trying to obtain current memory policy. 00:08:15.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.093 EAL: Restoring previous memory policy: 4 00:08:16.093 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.093 EAL: request: mp_malloc_sync 00:08:16.093 EAL: No shared files mode enabled, IPC is disabled 00:08:16.093 EAL: Heap on socket 0 was expanded by 258MB 00:08:16.093 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.093 EAL: request: mp_malloc_sync 00:08:16.093 EAL: No shared files mode enabled, IPC is disabled 00:08:16.093 EAL: Heap on socket 0 was shrunk by 258MB 00:08:16.093 EAL: Trying to obtain current memory policy. 00:08:16.093 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.350 EAL: Restoring previous memory policy: 4 00:08:16.350 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.350 EAL: request: mp_malloc_sync 00:08:16.350 EAL: No shared files mode enabled, IPC is disabled 00:08:16.350 EAL: Heap on socket 0 was expanded by 514MB 00:08:16.350 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.350 EAL: request: mp_malloc_sync 00:08:16.350 EAL: No shared files mode enabled, IPC is disabled 00:08:16.350 EAL: Heap on socket 0 was shrunk by 514MB 00:08:16.350 EAL: Trying to obtain current memory policy. 
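The expand/shrink sizes through this suite (4, 6, 10, 18, 34, 66, 130, 258, 514 and, just below, 1026 MB) appear to follow the recurrence size_next = 2*size - 2; a throwaway loop reproduces the schedule:

    # reproduce the heap sizes seen in the vtophys_spdk_malloc_test trace
    size=4
    while (( size <= 1026 )); do
        echo "${size}MB"
        (( size = 2 * size - 2 ))
    done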
00:08:16.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.613 EAL: Restoring previous memory policy: 4 00:08:16.613 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.613 EAL: request: mp_malloc_sync 00:08:16.613 EAL: No shared files mode enabled, IPC is disabled 00:08:16.613 EAL: Heap on socket 0 was expanded by 1026MB 00:08:16.870 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.128 passed 00:08:17.128 00:08:17.128 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.128 suites 1 1 n/a 0 0 00:08:17.128 tests 2 2 2 0 0 00:08:17.128 asserts 5463 5463 5463 0 n/a 00:08:17.128 00:08:17.128 Elapsed time = 1.279 seconds 00:08:17.128 EAL: request: mp_malloc_sync 00:08:17.128 EAL: No shared files mode enabled, IPC is disabled 00:08:17.128 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:17.128 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.128 EAL: request: mp_malloc_sync 00:08:17.128 EAL: No shared files mode enabled, IPC is disabled 00:08:17.128 EAL: Heap on socket 0 was shrunk by 2MB 00:08:17.128 EAL: No shared files mode enabled, IPC is disabled 00:08:17.128 EAL: No shared files mode enabled, IPC is disabled 00:08:17.128 EAL: No shared files mode enabled, IPC is disabled 00:08:17.128 00:08:17.128 real 0m1.504s 00:08:17.128 user 0m0.800s 00:08:17.128 sys 0m0.561s 00:08:17.128 ************************************ 00:08:17.128 END TEST env_vtophys 00:08:17.128 ************************************ 00:08:17.128 13:02:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.128 13:02:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:17.128 13:02:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:17.128 13:02:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.128 13:02:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.128 13:02:05 env -- common/autotest_common.sh@10 -- # set +x 00:08:17.128 ************************************ 00:08:17.128 START TEST env_pci 00:08:17.128 ************************************ 00:08:17.128 13:02:05 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:17.128 00:08:17.128 00:08:17.128 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.128 http://cunit.sourceforge.net/ 00:08:17.128 00:08:17.128 00:08:17.128 Suite: pci 00:08:17.128 Test: pci_hook ...[2024-12-06 13:02:05.665847] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58607 has claimed it 00:08:17.128 passed 00:08:17.128 00:08:17.128 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.128 suites 1 1 n/a 0 0 00:08:17.128 tests 1 1 1 0 0 00:08:17.128 asserts 25 25 25 0 n/a 00:08:17.128 00:08:17.128 Elapsed time = 0.003 seconds 00:08:17.128 EAL: Cannot find device (10000:00:01.0) 00:08:17.128 EAL: Failed to attach device on primary process 00:08:17.128 00:08:17.128 real 0m0.021s 00:08:17.128 user 0m0.009s 00:08:17.128 sys 0m0.011s 00:08:17.128 13:02:05 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.128 ************************************ 00:08:17.128 END TEST env_pci 00:08:17.128 ************************************ 00:08:17.128 13:02:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:17.128 13:02:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:17.128 13:02:05 env -- env/env.sh@15 -- # uname 00:08:17.128 13:02:05 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:17.128 13:02:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:17.128 13:02:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:17.128 13:02:05 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:17.128 13:02:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.128 13:02:05 env -- common/autotest_common.sh@10 -- # set +x 00:08:17.128 ************************************ 00:08:17.128 START TEST env_dpdk_post_init 00:08:17.128 ************************************ 00:08:17.128 13:02:05 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:17.128 EAL: Detected CPU lcores: 10 00:08:17.128 EAL: Detected NUMA nodes: 1 00:08:17.128 EAL: Detected shared linkage of DPDK 00:08:17.384 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:17.384 EAL: Selected IOVA mode 'PA' 00:08:17.384 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:17.384 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:17.384 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:17.384 Starting DPDK initialization... 00:08:17.384 Starting SPDK post initialization... 00:08:17.384 SPDK NVMe probe 00:08:17.384 Attaching to 0000:00:10.0 00:08:17.384 Attaching to 0000:00:11.0 00:08:17.384 Attached to 0000:00:10.0 00:08:17.384 Attached to 0000:00:11.0 00:08:17.384 Cleaning up... 00:08:17.384 00:08:17.384 real 0m0.202s 00:08:17.384 user 0m0.059s 00:08:17.384 sys 0m0.042s 00:08:17.384 ************************************ 00:08:17.384 END TEST env_dpdk_post_init 00:08:17.384 ************************************ 00:08:17.384 13:02:05 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.384 13:02:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:17.384 13:02:05 env -- env/env.sh@26 -- # uname 00:08:17.384 13:02:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:17.384 13:02:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:17.384 13:02:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.384 13:02:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.384 13:02:05 env -- common/autotest_common.sh@10 -- # set +x 00:08:17.384 ************************************ 00:08:17.384 START TEST env_mem_callbacks 00:08:17.384 ************************************ 00:08:17.384 13:02:05 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:17.384 EAL: Detected CPU lcores: 10 00:08:17.384 EAL: Detected NUMA nodes: 1 00:08:17.384 EAL: Detected shared linkage of DPDK 00:08:17.384 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:17.384 EAL: Selected IOVA mode 'PA' 00:08:17.641 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:17.641 00:08:17.641 00:08:17.641 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.641 http://cunit.sourceforge.net/ 00:08:17.641 00:08:17.641 00:08:17.641 Suite: memory 00:08:17.641 Test: test ... 
00:08:17.641 register 0x200000200000 2097152 00:08:17.641 malloc 3145728 00:08:17.641 register 0x200000400000 4194304 00:08:17.641 buf 0x200000500000 len 3145728 PASSED 00:08:17.641 malloc 64 00:08:17.641 buf 0x2000004fff40 len 64 PASSED 00:08:17.641 malloc 4194304 00:08:17.641 register 0x200000800000 6291456 00:08:17.641 buf 0x200000a00000 len 4194304 PASSED 00:08:17.641 free 0x200000500000 3145728 00:08:17.641 free 0x2000004fff40 64 00:08:17.641 unregister 0x200000400000 4194304 PASSED 00:08:17.641 free 0x200000a00000 4194304 00:08:17.641 unregister 0x200000800000 6291456 PASSED 00:08:17.641 malloc 8388608 00:08:17.641 register 0x200000400000 10485760 00:08:17.641 buf 0x200000600000 len 8388608 PASSED 00:08:17.641 free 0x200000600000 8388608 00:08:17.641 unregister 0x200000400000 10485760 PASSED 00:08:17.641 passed 00:08:17.641 00:08:17.641 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.641 suites 1 1 n/a 0 0 00:08:17.641 tests 1 1 1 0 0 00:08:17.641 asserts 15 15 15 0 n/a 00:08:17.641 00:08:17.641 Elapsed time = 0.008 seconds 00:08:17.641 00:08:17.641 real 0m0.150s 00:08:17.641 user 0m0.019s 00:08:17.641 sys 0m0.027s 00:08:17.641 13:02:06 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.641 13:02:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:17.641 ************************************ 00:08:17.641 END TEST env_mem_callbacks 00:08:17.641 ************************************ 00:08:17.641 00:08:17.641 real 0m2.572s 00:08:17.641 user 0m1.282s 00:08:17.641 sys 0m0.922s 00:08:17.641 13:02:06 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.641 13:02:06 env -- common/autotest_common.sh@10 -- # set +x 00:08:17.641 ************************************ 00:08:17.641 END TEST env 00:08:17.641 ************************************ 00:08:17.641 13:02:06 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:17.641 13:02:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.641 13:02:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.641 13:02:06 -- common/autotest_common.sh@10 -- # set +x 00:08:17.641 ************************************ 00:08:17.641 START TEST rpc 00:08:17.641 ************************************ 00:08:17.641 13:02:06 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:17.899 * Looking for test storage... 
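The lcov version gate traced for the env suite above (and again for rpc just below) amounts to a field-wise compare; a simplified reconstruction of the '<' case, with the decimal-sanitizing helper omitted and numeric components assumed:

    lt() {    # lt 1.15 2  -> exit 0 when the first version sorts before the second
        local -a ver1 ver2
        local IFS=.-: v len
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not 'less than'
    }
    lt 1.15 2 && echo "lcov is older than 2; enable branch/function coverage opts"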
00:08:17.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:17.899 13:02:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.899 13:02:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.899 13:02:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.899 13:02:06 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.899 13:02:06 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.899 13:02:06 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.899 13:02:06 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.899 13:02:06 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.899 13:02:06 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.899 13:02:06 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.899 13:02:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.899 13:02:06 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:17.899 13:02:06 rpc -- scripts/common.sh@345 -- # : 1 00:08:17.899 13:02:06 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.899 13:02:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.899 13:02:06 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:17.899 13:02:06 rpc -- scripts/common.sh@353 -- # local d=1 00:08:17.899 13:02:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.899 13:02:06 rpc -- scripts/common.sh@355 -- # echo 1 00:08:17.899 13:02:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.899 13:02:06 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:17.899 13:02:06 rpc -- scripts/common.sh@353 -- # local d=2 00:08:17.899 13:02:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.899 13:02:06 rpc -- scripts/common.sh@355 -- # echo 2 00:08:17.899 13:02:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.899 13:02:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.899 13:02:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.899 13:02:06 rpc -- scripts/common.sh@368 -- # return 0 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:17.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.899 --rc genhtml_branch_coverage=1 00:08:17.899 --rc genhtml_function_coverage=1 00:08:17.899 --rc genhtml_legend=1 00:08:17.899 --rc geninfo_all_blocks=1 00:08:17.899 --rc geninfo_unexecuted_blocks=1 00:08:17.899 00:08:17.899 ' 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:17.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.899 --rc genhtml_branch_coverage=1 00:08:17.899 --rc genhtml_function_coverage=1 00:08:17.899 --rc genhtml_legend=1 00:08:17.899 --rc geninfo_all_blocks=1 00:08:17.899 --rc geninfo_unexecuted_blocks=1 00:08:17.899 00:08:17.899 ' 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:17.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.899 --rc genhtml_branch_coverage=1 00:08:17.899 --rc genhtml_function_coverage=1 00:08:17.899 --rc 
genhtml_legend=1 00:08:17.899 --rc geninfo_all_blocks=1 00:08:17.899 --rc geninfo_unexecuted_blocks=1 00:08:17.899 00:08:17.899 ' 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:17.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.899 --rc genhtml_branch_coverage=1 00:08:17.899 --rc genhtml_function_coverage=1 00:08:17.899 --rc genhtml_legend=1 00:08:17.899 --rc geninfo_all_blocks=1 00:08:17.899 --rc geninfo_unexecuted_blocks=1 00:08:17.899 00:08:17.899 ' 00:08:17.899 13:02:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58730 00:08:17.899 13:02:06 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:17.899 13:02:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:17.899 13:02:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58730 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@835 -- # '[' -z 58730 ']' 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.899 13:02:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.899 [2024-12-06 13:02:06.507672] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:08:17.899 [2024-12-06 13:02:06.507782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58730 ] 00:08:18.156 [2024-12-06 13:02:06.648175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.156 [2024-12-06 13:02:06.724701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:18.156 [2024-12-06 13:02:06.724768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58730' to capture a snapshot of events at runtime. 00:08:18.156 [2024-12-06 13:02:06.724781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.156 [2024-12-06 13:02:06.724790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.156 [2024-12-06 13:02:06.724798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58730 for offline analysis/debug. 
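The app_setup_trace NOTICE lines above spell out both ways to inspect the tracepoints enabled by starting spdk_tgt with '-e bdev'. A minimal sketch of the two capture paths, assuming the spdk_trace binary from the same build tree (pid 58730 is specific to this run; substitute the live spdk_tgt pid locally):

    #!/usr/bin/env bash
    SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}   # assumption: checkout path as used in this log

    # Live snapshot of events at runtime, exactly as the notice suggests
    "$SPDK_ROOT/build/bin/spdk_trace" -s spdk_tgt -p 58730

    # Or copy the shared-memory trace file for offline analysis/debug
    cp /dev/shm/spdk_tgt_trace.pid58730 /tmp/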
00:08:18.156 [2024-12-06 13:02:06.725276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.101 13:02:07 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.101 13:02:07 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:19.101 13:02:07 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:19.101 13:02:07 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:19.101 13:02:07 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:19.101 13:02:07 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:19.101 13:02:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.101 13:02:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.101 13:02:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 ************************************ 00:08:19.101 START TEST rpc_integrity 00:08:19.101 ************************************ 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:19.101 { 00:08:19.101 "aliases": [ 00:08:19.101 "16665dd5-6903-4b5c-b56b-fcc5e77b6934" 00:08:19.101 ], 00:08:19.101 "assigned_rate_limits": { 00:08:19.101 "r_mbytes_per_sec": 0, 00:08:19.101 "rw_ios_per_sec": 0, 00:08:19.101 "rw_mbytes_per_sec": 0, 00:08:19.101 "w_mbytes_per_sec": 0 00:08:19.101 }, 00:08:19.101 "block_size": 512, 00:08:19.101 "claimed": false, 00:08:19.101 "driver_specific": {}, 00:08:19.101 "memory_domains": [ 00:08:19.101 { 00:08:19.101 "dma_device_id": "system", 00:08:19.101 "dma_device_type": 1 00:08:19.101 }, 00:08:19.101 { 00:08:19.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.101 "dma_device_type": 2 00:08:19.101 } 00:08:19.101 ], 00:08:19.101 "name": "Malloc0", 
00:08:19.101 "num_blocks": 16384, 00:08:19.101 "product_name": "Malloc disk", 00:08:19.101 "supported_io_types": { 00:08:19.101 "abort": true, 00:08:19.101 "compare": false, 00:08:19.101 "compare_and_write": false, 00:08:19.101 "copy": true, 00:08:19.101 "flush": true, 00:08:19.101 "get_zone_info": false, 00:08:19.101 "nvme_admin": false, 00:08:19.101 "nvme_io": false, 00:08:19.101 "nvme_io_md": false, 00:08:19.101 "nvme_iov_md": false, 00:08:19.101 "read": true, 00:08:19.101 "reset": true, 00:08:19.101 "seek_data": false, 00:08:19.101 "seek_hole": false, 00:08:19.101 "unmap": true, 00:08:19.101 "write": true, 00:08:19.101 "write_zeroes": true, 00:08:19.101 "zcopy": true, 00:08:19.101 "zone_append": false, 00:08:19.101 "zone_management": false 00:08:19.101 }, 00:08:19.101 "uuid": "16665dd5-6903-4b5c-b56b-fcc5e77b6934", 00:08:19.101 "zoned": false 00:08:19.101 } 00:08:19.101 ]' 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 [2024-12-06 13:02:07.715535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:19.101 [2024-12-06 13:02:07.715605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.101 [2024-12-06 13:02:07.715657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19903e0 00:08:19.101 [2024-12-06 13:02:07.715675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.101 [2024-12-06 13:02:07.717441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.101 [2024-12-06 13:02:07.717496] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:19.101 Passthru0 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.101 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.101 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.358 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.358 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:19.358 { 00:08:19.358 "aliases": [ 00:08:19.358 "16665dd5-6903-4b5c-b56b-fcc5e77b6934" 00:08:19.358 ], 00:08:19.358 "assigned_rate_limits": { 00:08:19.358 "r_mbytes_per_sec": 0, 00:08:19.358 "rw_ios_per_sec": 0, 00:08:19.359 "rw_mbytes_per_sec": 0, 00:08:19.359 "w_mbytes_per_sec": 0 00:08:19.359 }, 00:08:19.359 "block_size": 512, 00:08:19.359 "claim_type": "exclusive_write", 00:08:19.359 "claimed": true, 00:08:19.359 "driver_specific": {}, 00:08:19.359 "memory_domains": [ 00:08:19.359 { 00:08:19.359 "dma_device_id": "system", 00:08:19.359 "dma_device_type": 1 00:08:19.359 }, 00:08:19.359 { 00:08:19.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.359 "dma_device_type": 2 00:08:19.359 } 00:08:19.359 ], 00:08:19.359 "name": "Malloc0", 00:08:19.359 "num_blocks": 16384, 00:08:19.359 "product_name": "Malloc disk", 00:08:19.359 "supported_io_types": { 00:08:19.359 "abort": true, 00:08:19.359 "compare": false, 00:08:19.359 
"compare_and_write": false, 00:08:19.359 "copy": true, 00:08:19.359 "flush": true, 00:08:19.359 "get_zone_info": false, 00:08:19.359 "nvme_admin": false, 00:08:19.359 "nvme_io": false, 00:08:19.359 "nvme_io_md": false, 00:08:19.359 "nvme_iov_md": false, 00:08:19.359 "read": true, 00:08:19.359 "reset": true, 00:08:19.359 "seek_data": false, 00:08:19.359 "seek_hole": false, 00:08:19.359 "unmap": true, 00:08:19.359 "write": true, 00:08:19.359 "write_zeroes": true, 00:08:19.359 "zcopy": true, 00:08:19.359 "zone_append": false, 00:08:19.359 "zone_management": false 00:08:19.359 }, 00:08:19.359 "uuid": "16665dd5-6903-4b5c-b56b-fcc5e77b6934", 00:08:19.359 "zoned": false 00:08:19.359 }, 00:08:19.359 { 00:08:19.359 "aliases": [ 00:08:19.359 "1c5f8e12-4f3e-54ec-bb6c-906891783edd" 00:08:19.359 ], 00:08:19.359 "assigned_rate_limits": { 00:08:19.359 "r_mbytes_per_sec": 0, 00:08:19.359 "rw_ios_per_sec": 0, 00:08:19.359 "rw_mbytes_per_sec": 0, 00:08:19.359 "w_mbytes_per_sec": 0 00:08:19.359 }, 00:08:19.359 "block_size": 512, 00:08:19.359 "claimed": false, 00:08:19.359 "driver_specific": { 00:08:19.359 "passthru": { 00:08:19.359 "base_bdev_name": "Malloc0", 00:08:19.359 "name": "Passthru0" 00:08:19.359 } 00:08:19.359 }, 00:08:19.359 "memory_domains": [ 00:08:19.359 { 00:08:19.359 "dma_device_id": "system", 00:08:19.359 "dma_device_type": 1 00:08:19.359 }, 00:08:19.359 { 00:08:19.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.359 "dma_device_type": 2 00:08:19.359 } 00:08:19.359 ], 00:08:19.359 "name": "Passthru0", 00:08:19.359 "num_blocks": 16384, 00:08:19.359 "product_name": "passthru", 00:08:19.359 "supported_io_types": { 00:08:19.359 "abort": true, 00:08:19.359 "compare": false, 00:08:19.359 "compare_and_write": false, 00:08:19.359 "copy": true, 00:08:19.359 "flush": true, 00:08:19.359 "get_zone_info": false, 00:08:19.359 "nvme_admin": false, 00:08:19.359 "nvme_io": false, 00:08:19.359 "nvme_io_md": false, 00:08:19.359 "nvme_iov_md": false, 00:08:19.359 "read": true, 00:08:19.359 "reset": true, 00:08:19.359 "seek_data": false, 00:08:19.359 "seek_hole": false, 00:08:19.359 "unmap": true, 00:08:19.359 "write": true, 00:08:19.359 "write_zeroes": true, 00:08:19.359 "zcopy": true, 00:08:19.359 "zone_append": false, 00:08:19.359 "zone_management": false 00:08:19.359 }, 00:08:19.359 "uuid": "1c5f8e12-4f3e-54ec-bb6c-906891783edd", 00:08:19.359 "zoned": false 00:08:19.359 } 00:08:19.359 ]' 00:08:19.359 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:19.359 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:19.359 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.359 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.359 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.359 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:19.359 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:19.359 13:02:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:19.359 00:08:19.359 real 0m0.344s 00:08:19.359 user 0m0.229s 00:08:19.359 sys 0m0.042s 00:08:19.359 ************************************ 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.359 13:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.359 END TEST rpc_integrity 00:08:19.359 ************************************ 00:08:19.359 13:02:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:19.359 13:02:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.359 13:02:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.359 13:02:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.359 ************************************ 00:08:19.359 START TEST rpc_plugins 00:08:19.359 ************************************ 00:08:19.359 13:02:07 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:19.359 13:02:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:19.359 13:02:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.359 13:02:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.359 13:02:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.359 13:02:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:19.359 13:02:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:19.359 13:02:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.359 13:02:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.359 13:02:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.359 13:02:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:19.359 { 00:08:19.359 "aliases": [ 00:08:19.359 "8d060668-cd0e-4c2e-882e-70e904949cae" 00:08:19.359 ], 00:08:19.359 "assigned_rate_limits": { 00:08:19.359 "r_mbytes_per_sec": 0, 00:08:19.359 "rw_ios_per_sec": 0, 00:08:19.359 "rw_mbytes_per_sec": 0, 00:08:19.359 "w_mbytes_per_sec": 0 00:08:19.360 }, 00:08:19.360 "block_size": 4096, 00:08:19.360 "claimed": false, 00:08:19.360 "driver_specific": {}, 00:08:19.360 "memory_domains": [ 00:08:19.360 { 00:08:19.360 "dma_device_id": "system", 00:08:19.360 "dma_device_type": 1 00:08:19.360 }, 00:08:19.360 { 00:08:19.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.360 "dma_device_type": 2 00:08:19.360 } 00:08:19.360 ], 00:08:19.360 "name": "Malloc1", 00:08:19.360 "num_blocks": 256, 00:08:19.360 "product_name": "Malloc disk", 00:08:19.360 "supported_io_types": { 00:08:19.360 "abort": true, 00:08:19.360 "compare": false, 00:08:19.360 "compare_and_write": false, 00:08:19.360 "copy": true, 00:08:19.360 "flush": true, 00:08:19.360 "get_zone_info": false, 00:08:19.360 "nvme_admin": false, 00:08:19.360 "nvme_io": false, 00:08:19.360 "nvme_io_md": false, 00:08:19.360 "nvme_iov_md": false, 00:08:19.360 "read": true, 00:08:19.360 "reset": true, 00:08:19.360 "seek_data": false, 00:08:19.360 "seek_hole": false, 00:08:19.360 "unmap": true, 00:08:19.360 "write": true, 00:08:19.360 "write_zeroes": true, 00:08:19.360 "zcopy": true, 00:08:19.360 "zone_append": false, 
00:08:19.360 "zone_management": false 00:08:19.360 }, 00:08:19.360 "uuid": "8d060668-cd0e-4c2e-882e-70e904949cae", 00:08:19.360 "zoned": false 00:08:19.360 } 00:08:19.360 ]' 00:08:19.360 13:02:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:19.617 13:02:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:19.617 13:02:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:19.617 13:02:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.617 13:02:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.617 13:02:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.617 13:02:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:19.617 13:02:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.617 13:02:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.617 13:02:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.617 13:02:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:19.617 13:02:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:19.617 13:02:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:19.617 00:08:19.617 real 0m0.159s 00:08:19.617 user 0m0.103s 00:08:19.617 sys 0m0.016s 00:08:19.617 ************************************ 00:08:19.617 13:02:08 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.617 13:02:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.617 END TEST rpc_plugins 00:08:19.617 ************************************ 00:08:19.617 13:02:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:19.617 13:02:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.617 13:02:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.617 13:02:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.617 ************************************ 00:08:19.617 START TEST rpc_trace_cmd_test 00:08:19.617 ************************************ 00:08:19.617 13:02:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:19.617 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:19.617 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:19.617 13:02:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.617 13:02:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.617 13:02:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.617 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:19.617 "bdev": { 00:08:19.617 "mask": "0x8", 00:08:19.617 "tpoint_mask": "0xffffffffffffffff" 00:08:19.617 }, 00:08:19.617 "bdev_nvme": { 00:08:19.617 "mask": "0x4000", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "bdev_raid": { 00:08:19.617 "mask": "0x20000", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "blob": { 00:08:19.617 "mask": "0x10000", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "blobfs": { 00:08:19.617 "mask": "0x80", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "dsa": { 00:08:19.617 "mask": "0x200", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "ftl": { 00:08:19.617 "mask": "0x40", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "iaa": { 00:08:19.617 "mask": "0x1000", 
00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "iscsi_conn": { 00:08:19.617 "mask": "0x2", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "nvme_pcie": { 00:08:19.617 "mask": "0x800", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "nvme_tcp": { 00:08:19.617 "mask": "0x2000", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "nvmf_rdma": { 00:08:19.617 "mask": "0x10", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "nvmf_tcp": { 00:08:19.617 "mask": "0x20", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "scheduler": { 00:08:19.617 "mask": "0x40000", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "scsi": { 00:08:19.617 "mask": "0x4", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "sock": { 00:08:19.617 "mask": "0x8000", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "thread": { 00:08:19.617 "mask": "0x400", 00:08:19.617 "tpoint_mask": "0x0" 00:08:19.617 }, 00:08:19.617 "tpoint_group_mask": "0x8", 00:08:19.617 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58730" 00:08:19.617 }' 00:08:19.617 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:19.617 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:19.617 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:19.875 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:19.875 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:19.875 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:19.875 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:19.875 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:19.875 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:19.875 13:02:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:19.875 00:08:19.875 real 0m0.292s 00:08:19.875 user 0m0.255s 00:08:19.875 sys 0m0.029s 00:08:19.875 13:02:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.875 13:02:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.875 ************************************ 00:08:19.875 END TEST rpc_trace_cmd_test 00:08:19.875 ************************************ 00:08:19.875 13:02:08 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:08:19.875 13:02:08 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:08:19.875 13:02:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.875 13:02:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.875 13:02:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.875 ************************************ 00:08:19.875 START TEST go_rpc 00:08:19.875 ************************************ 00:08:19.875 13:02:08 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:08:19.875 13:02:08 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:19.875 13:02:08 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:08:19.875 13:02:08 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:08:20.132 13:02:08 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.132 13:02:08 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.132 13:02:08 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["bc49f70f-bf6d-4b19-80f8-836c37b5b0b8"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"bc49f70f-bf6d-4b19-80f8-836c37b5b0b8","zoned":false}]' 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:20.132 13:02:08 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.132 13:02:08 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.132 13:02:08 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:08:20.132 13:02:08 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:08:20.132 00:08:20.132 real 0m0.233s 00:08:20.132 user 0m0.166s 00:08:20.132 sys 0m0.035s 00:08:20.132 13:02:08 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.132 13:02:08 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.132 ************************************ 00:08:20.132 END TEST go_rpc 00:08:20.132 ************************************ 00:08:20.132 13:02:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:20.132 13:02:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:20.132 13:02:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.132 13:02:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.132 13:02:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.132 ************************************ 00:08:20.132 START TEST rpc_daemon_integrity 00:08:20.132 ************************************ 00:08:20.132 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:20.132 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:20.132 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.132 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.132 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.132 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:20.132 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:20.390 
13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:20.390 { 00:08:20.390 "aliases": [ 00:08:20.390 "579392b3-2e16-4980-a1f1-7780e888cfe7" 00:08:20.390 ], 00:08:20.390 "assigned_rate_limits": { 00:08:20.390 "r_mbytes_per_sec": 0, 00:08:20.390 "rw_ios_per_sec": 0, 00:08:20.390 "rw_mbytes_per_sec": 0, 00:08:20.390 "w_mbytes_per_sec": 0 00:08:20.390 }, 00:08:20.390 "block_size": 512, 00:08:20.390 "claimed": false, 00:08:20.390 "driver_specific": {}, 00:08:20.390 "memory_domains": [ 00:08:20.390 { 00:08:20.390 "dma_device_id": "system", 00:08:20.390 "dma_device_type": 1 00:08:20.390 }, 00:08:20.390 { 00:08:20.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.390 "dma_device_type": 2 00:08:20.390 } 00:08:20.390 ], 00:08:20.390 "name": "Malloc3", 00:08:20.390 "num_blocks": 16384, 00:08:20.390 "product_name": "Malloc disk", 00:08:20.390 "supported_io_types": { 00:08:20.390 "abort": true, 00:08:20.390 "compare": false, 00:08:20.390 "compare_and_write": false, 00:08:20.390 "copy": true, 00:08:20.390 "flush": true, 00:08:20.390 "get_zone_info": false, 00:08:20.390 "nvme_admin": false, 00:08:20.390 "nvme_io": false, 00:08:20.390 "nvme_io_md": false, 00:08:20.390 "nvme_iov_md": false, 00:08:20.390 "read": true, 00:08:20.390 "reset": true, 00:08:20.390 "seek_data": false, 00:08:20.390 "seek_hole": false, 00:08:20.390 "unmap": true, 00:08:20.390 "write": true, 00:08:20.390 "write_zeroes": true, 00:08:20.390 "zcopy": true, 00:08:20.390 "zone_append": false, 00:08:20.390 "zone_management": false 00:08:20.390 }, 00:08:20.390 "uuid": "579392b3-2e16-4980-a1f1-7780e888cfe7", 00:08:20.390 "zoned": false 00:08:20.390 } 00:08:20.390 ]' 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.390 [2024-12-06 13:02:08.921281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:20.390 [2024-12-06 13:02:08.921352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.390 [2024-12-06 13:02:08.921380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x184f490 00:08:20.390 [2024-12-06 13:02:08.921393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:20.390 [2024-12-06 13:02:08.923192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.390 [2024-12-06 13:02:08.923233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:20.390 Passthru0 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.390 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:20.390 { 00:08:20.390 "aliases": [ 00:08:20.390 "579392b3-2e16-4980-a1f1-7780e888cfe7" 00:08:20.390 ], 00:08:20.390 "assigned_rate_limits": { 00:08:20.390 "r_mbytes_per_sec": 0, 00:08:20.390 "rw_ios_per_sec": 0, 00:08:20.390 "rw_mbytes_per_sec": 0, 00:08:20.390 "w_mbytes_per_sec": 0 00:08:20.390 }, 00:08:20.390 "block_size": 512, 00:08:20.390 "claim_type": "exclusive_write", 00:08:20.390 "claimed": true, 00:08:20.390 "driver_specific": {}, 00:08:20.390 "memory_domains": [ 00:08:20.390 { 00:08:20.390 "dma_device_id": "system", 00:08:20.390 "dma_device_type": 1 00:08:20.390 }, 00:08:20.390 { 00:08:20.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.390 "dma_device_type": 2 00:08:20.390 } 00:08:20.390 ], 00:08:20.390 "name": "Malloc3", 00:08:20.390 "num_blocks": 16384, 00:08:20.390 "product_name": "Malloc disk", 00:08:20.390 "supported_io_types": { 00:08:20.390 "abort": true, 00:08:20.390 "compare": false, 00:08:20.390 "compare_and_write": false, 00:08:20.390 "copy": true, 00:08:20.390 "flush": true, 00:08:20.390 "get_zone_info": false, 00:08:20.391 "nvme_admin": false, 00:08:20.391 "nvme_io": false, 00:08:20.391 "nvme_io_md": false, 00:08:20.391 "nvme_iov_md": false, 00:08:20.391 "read": true, 00:08:20.391 "reset": true, 00:08:20.391 "seek_data": false, 00:08:20.391 "seek_hole": false, 00:08:20.391 "unmap": true, 00:08:20.391 "write": true, 00:08:20.391 "write_zeroes": true, 00:08:20.391 "zcopy": true, 00:08:20.391 "zone_append": false, 00:08:20.391 "zone_management": false 00:08:20.391 }, 00:08:20.391 "uuid": "579392b3-2e16-4980-a1f1-7780e888cfe7", 00:08:20.391 "zoned": false 00:08:20.391 }, 00:08:20.391 { 00:08:20.391 "aliases": [ 00:08:20.391 "c6bacd34-ee64-5de1-a78d-eb519b76f1c4" 00:08:20.391 ], 00:08:20.391 "assigned_rate_limits": { 00:08:20.391 "r_mbytes_per_sec": 0, 00:08:20.391 "rw_ios_per_sec": 0, 00:08:20.391 "rw_mbytes_per_sec": 0, 00:08:20.391 "w_mbytes_per_sec": 0 00:08:20.391 }, 00:08:20.391 "block_size": 512, 00:08:20.391 "claimed": false, 00:08:20.391 "driver_specific": { 00:08:20.391 "passthru": { 00:08:20.391 "base_bdev_name": "Malloc3", 00:08:20.391 "name": "Passthru0" 00:08:20.391 } 00:08:20.391 }, 00:08:20.391 "memory_domains": [ 00:08:20.391 { 00:08:20.391 "dma_device_id": "system", 00:08:20.391 "dma_device_type": 1 00:08:20.391 }, 00:08:20.391 { 00:08:20.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.391 "dma_device_type": 2 00:08:20.391 } 00:08:20.391 ], 00:08:20.391 "name": "Passthru0", 00:08:20.391 "num_blocks": 16384, 00:08:20.391 "product_name": "passthru", 00:08:20.391 "supported_io_types": { 00:08:20.391 "abort": true, 00:08:20.391 "compare": false, 00:08:20.391 "compare_and_write": false, 00:08:20.391 "copy": true, 
00:08:20.391 "flush": true, 00:08:20.391 "get_zone_info": false, 00:08:20.391 "nvme_admin": false, 00:08:20.391 "nvme_io": false, 00:08:20.391 "nvme_io_md": false, 00:08:20.391 "nvme_iov_md": false, 00:08:20.391 "read": true, 00:08:20.391 "reset": true, 00:08:20.391 "seek_data": false, 00:08:20.391 "seek_hole": false, 00:08:20.391 "unmap": true, 00:08:20.391 "write": true, 00:08:20.391 "write_zeroes": true, 00:08:20.391 "zcopy": true, 00:08:20.391 "zone_append": false, 00:08:20.391 "zone_management": false 00:08:20.391 }, 00:08:20.391 "uuid": "c6bacd34-ee64-5de1-a78d-eb519b76f1c4", 00:08:20.391 "zoned": false 00:08:20.391 } 00:08:20.391 ]' 00:08:20.391 13:02:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:20.391 13:02:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:20.391 13:02:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:20.391 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.391 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.391 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.391 13:02:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:08:20.391 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.391 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.649 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.649 13:02:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:20.649 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.649 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.649 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.649 13:02:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:20.649 13:02:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:20.649 13:02:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:20.649 00:08:20.649 real 0m0.347s 00:08:20.649 user 0m0.239s 00:08:20.649 sys 0m0.040s 00:08:20.649 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.649 13:02:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.649 ************************************ 00:08:20.649 END TEST rpc_daemon_integrity 00:08:20.649 ************************************ 00:08:20.649 13:02:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:20.649 13:02:09 rpc -- rpc/rpc.sh@84 -- # killprocess 58730 00:08:20.649 13:02:09 rpc -- common/autotest_common.sh@954 -- # '[' -z 58730 ']' 00:08:20.649 13:02:09 rpc -- common/autotest_common.sh@958 -- # kill -0 58730 00:08:20.649 13:02:09 rpc -- common/autotest_common.sh@959 -- # uname 00:08:20.649 13:02:09 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.649 13:02:09 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58730 00:08:20.649 13:02:09 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.649 13:02:09 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.649 13:02:09 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58730' 00:08:20.649 killing process with pid 58730 00:08:20.649 13:02:09 rpc -- 
common/autotest_common.sh@973 -- # kill 58730 00:08:20.649 13:02:09 rpc -- common/autotest_common.sh@978 -- # wait 58730 00:08:21.215 00:08:21.216 real 0m3.334s 00:08:21.216 user 0m4.364s 00:08:21.216 sys 0m0.838s 00:08:21.216 13:02:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.216 13:02:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.216 ************************************ 00:08:21.216 END TEST rpc 00:08:21.216 ************************************ 00:08:21.216 13:02:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:21.216 13:02:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.216 13:02:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.216 13:02:09 -- common/autotest_common.sh@10 -- # set +x 00:08:21.216 ************************************ 00:08:21.216 START TEST skip_rpc 00:08:21.216 ************************************ 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:21.216 * Looking for test storage... 00:08:21.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.216 13:02:09 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:21.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.216 --rc genhtml_branch_coverage=1 00:08:21.216 --rc genhtml_function_coverage=1 00:08:21.216 --rc genhtml_legend=1 00:08:21.216 --rc geninfo_all_blocks=1 00:08:21.216 --rc geninfo_unexecuted_blocks=1 00:08:21.216 00:08:21.216 ' 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:21.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.216 --rc genhtml_branch_coverage=1 00:08:21.216 --rc genhtml_function_coverage=1 00:08:21.216 --rc genhtml_legend=1 00:08:21.216 --rc geninfo_all_blocks=1 00:08:21.216 --rc geninfo_unexecuted_blocks=1 00:08:21.216 00:08:21.216 ' 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:21.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.216 --rc genhtml_branch_coverage=1 00:08:21.216 --rc genhtml_function_coverage=1 00:08:21.216 --rc genhtml_legend=1 00:08:21.216 --rc geninfo_all_blocks=1 00:08:21.216 --rc geninfo_unexecuted_blocks=1 00:08:21.216 00:08:21.216 ' 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:21.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.216 --rc genhtml_branch_coverage=1 00:08:21.216 --rc genhtml_function_coverage=1 00:08:21.216 --rc genhtml_legend=1 00:08:21.216 --rc geninfo_all_blocks=1 00:08:21.216 --rc geninfo_unexecuted_blocks=1 00:08:21.216 00:08:21.216 ' 00:08:21.216 13:02:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:21.216 13:02:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:21.216 13:02:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.216 13:02:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.216 ************************************ 00:08:21.216 START TEST skip_rpc 00:08:21.216 ************************************ 00:08:21.216 13:02:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:21.216 13:02:09 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=59004 00:08:21.216 13:02:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:21.216 13:02:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:21.216 13:02:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:21.475 [2024-12-06 13:02:09.866405] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:08:21.475 [2024-12-06 13:02:09.866867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59004 ] 00:08:21.475 [2024-12-06 13:02:10.021499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.475 [2024-12-06 13:02:10.084703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.756 2024/12/06 13:02:14 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59004 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59004 ']' 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59004 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59004 00:08:26.756 killing process with pid 59004 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.756 13:02:14 skip_rpc.skip_rpc 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59004' 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59004 00:08:26.756 13:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59004 00:08:26.756 00:08:26.756 real 0m5.445s 00:08:26.756 user 0m5.066s 00:08:26.756 sys 0m0.303s 00:08:26.756 13:02:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.756 ************************************ 00:08:26.756 13:02:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.756 END TEST skip_rpc 00:08:26.756 ************************************ 00:08:26.756 13:02:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:26.756 13:02:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.756 13:02:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.756 13:02:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.756 ************************************ 00:08:26.756 START TEST skip_rpc_with_json 00:08:26.756 ************************************ 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59092 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59092 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59092 ']' 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.756 13:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:26.756 [2024-12-06 13:02:15.348095] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
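The skip_rpc test above starts spdk_tgt with --no-rpc-server, so the spdk_get_version call is expected to fail with the "could not connect to a Unix socket" client error seen in the trace. A minimal sketch of the same check using the stock scripts/rpc.py client (the socket-existence guard is illustrative only, not the harness's own logic):

    #!/usr/bin/env bash
    SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}   # assumption: checkout path as used in this log
    sock=/var/tmp/spdk.sock

    if [[ -S "$sock" ]]; then
        # A target started without --no-rpc-server answers this
        "$SPDK_ROOT/scripts/rpc.py" -s "$sock" spdk_get_version
    else
        # The --no-rpc-server case exercised by the test above
        echo "no RPC server listening on $sock" >&2
    fi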
00:08:26.756 [2024-12-06 13:02:15.348436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59092 ] 00:08:27.014 [2024-12-06 13:02:15.501945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.014 [2024-12-06 13:02:15.580072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:27.949 [2024-12-06 13:02:16.358965] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:27.949 2024/12/06 13:02:16 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:08:27.949 request: 00:08:27.949 { 00:08:27.949 "method": "nvmf_get_transports", 00:08:27.949 "params": { 00:08:27.949 "trtype": "tcp" 00:08:27.949 } 00:08:27.949 } 00:08:27.949 Got JSON-RPC error response 00:08:27.949 GoRPCClient: error on JSON-RPC call 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:27.949 [2024-12-06 13:02:16.371075] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.949 13:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:27.949 { 00:08:27.949 "subsystems": [ 00:08:27.949 { 00:08:27.949 "subsystem": "fsdev", 00:08:27.949 "config": [ 00:08:27.949 { 00:08:27.949 "method": "fsdev_set_opts", 00:08:27.949 "params": { 00:08:27.949 "fsdev_io_cache_size": 256, 00:08:27.949 "fsdev_io_pool_size": 65535 00:08:27.949 } 00:08:27.949 } 00:08:27.949 ] 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "subsystem": "keyring", 00:08:27.949 "config": [] 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "subsystem": "iobuf", 00:08:27.949 "config": [ 00:08:27.949 { 00:08:27.949 "method": "iobuf_set_options", 00:08:27.949 "params": { 00:08:27.949 "enable_numa": false, 00:08:27.949 "large_bufsize": 135168, 00:08:27.949 "large_pool_count": 1024, 00:08:27.949 "small_bufsize": 8192, 00:08:27.949 "small_pool_count": 8192 00:08:27.949 } 
00:08:27.949 } 00:08:27.949 ] 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "subsystem": "sock", 00:08:27.949 "config": [ 00:08:27.949 { 00:08:27.949 "method": "sock_set_default_impl", 00:08:27.949 "params": { 00:08:27.949 "impl_name": "posix" 00:08:27.949 } 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "method": "sock_impl_set_options", 00:08:27.949 "params": { 00:08:27.949 "enable_ktls": false, 00:08:27.949 "enable_placement_id": 0, 00:08:27.949 "enable_quickack": false, 00:08:27.949 "enable_recv_pipe": true, 00:08:27.949 "enable_zerocopy_send_client": false, 00:08:27.949 "enable_zerocopy_send_server": true, 00:08:27.949 "impl_name": "ssl", 00:08:27.949 "recv_buf_size": 4096, 00:08:27.949 "send_buf_size": 4096, 00:08:27.949 "tls_version": 0, 00:08:27.949 "zerocopy_threshold": 0 00:08:27.949 } 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "method": "sock_impl_set_options", 00:08:27.949 "params": { 00:08:27.949 "enable_ktls": false, 00:08:27.949 "enable_placement_id": 0, 00:08:27.949 "enable_quickack": false, 00:08:27.949 "enable_recv_pipe": true, 00:08:27.949 "enable_zerocopy_send_client": false, 00:08:27.949 "enable_zerocopy_send_server": true, 00:08:27.949 "impl_name": "posix", 00:08:27.949 "recv_buf_size": 2097152, 00:08:27.949 "send_buf_size": 2097152, 00:08:27.949 "tls_version": 0, 00:08:27.949 "zerocopy_threshold": 0 00:08:27.949 } 00:08:27.949 } 00:08:27.949 ] 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "subsystem": "vmd", 00:08:27.949 "config": [] 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "subsystem": "accel", 00:08:27.949 "config": [ 00:08:27.949 { 00:08:27.949 "method": "accel_set_options", 00:08:27.949 "params": { 00:08:27.949 "buf_count": 2048, 00:08:27.949 "large_cache_size": 16, 00:08:27.949 "sequence_count": 2048, 00:08:27.949 "small_cache_size": 128, 00:08:27.949 "task_count": 2048 00:08:27.949 } 00:08:27.949 } 00:08:27.949 ] 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "subsystem": "bdev", 00:08:27.949 "config": [ 00:08:27.949 { 00:08:27.949 "method": "bdev_set_options", 00:08:27.949 "params": { 00:08:27.949 "bdev_auto_examine": true, 00:08:27.949 "bdev_io_cache_size": 256, 00:08:27.949 "bdev_io_pool_size": 65535, 00:08:27.949 "iobuf_large_cache_size": 16, 00:08:27.949 "iobuf_small_cache_size": 128 00:08:27.949 } 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "method": "bdev_raid_set_options", 00:08:27.949 "params": { 00:08:27.949 "process_max_bandwidth_mb_sec": 0, 00:08:27.949 "process_window_size_kb": 1024 00:08:27.949 } 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "method": "bdev_iscsi_set_options", 00:08:27.949 "params": { 00:08:27.949 "timeout_sec": 30 00:08:27.949 } 00:08:27.949 }, 00:08:27.949 { 00:08:27.949 "method": "bdev_nvme_set_options", 00:08:27.949 "params": { 00:08:27.949 "action_on_timeout": "none", 00:08:27.949 "allow_accel_sequence": false, 00:08:27.949 "arbitration_burst": 0, 00:08:27.949 "bdev_retry_count": 3, 00:08:27.949 "ctrlr_loss_timeout_sec": 0, 00:08:27.949 "delay_cmd_submit": true, 00:08:27.949 "dhchap_dhgroups": [ 00:08:27.949 "null", 00:08:27.949 "ffdhe2048", 00:08:27.949 "ffdhe3072", 00:08:27.950 "ffdhe4096", 00:08:27.950 "ffdhe6144", 00:08:27.950 "ffdhe8192" 00:08:27.950 ], 00:08:27.950 "dhchap_digests": [ 00:08:27.950 "sha256", 00:08:27.950 "sha384", 00:08:27.950 "sha512" 00:08:27.950 ], 00:08:27.950 "disable_auto_failback": false, 00:08:27.950 "fast_io_fail_timeout_sec": 0, 00:08:27.950 "generate_uuids": false, 00:08:27.950 "high_priority_weight": 0, 00:08:27.950 "io_path_stat": false, 00:08:27.950 "io_queue_requests": 0, 00:08:27.950 
"keep_alive_timeout_ms": 10000, 00:08:27.950 "low_priority_weight": 0, 00:08:27.950 "medium_priority_weight": 0, 00:08:27.950 "nvme_adminq_poll_period_us": 10000, 00:08:27.950 "nvme_error_stat": false, 00:08:27.950 "nvme_ioq_poll_period_us": 0, 00:08:27.950 "rdma_cm_event_timeout_ms": 0, 00:08:27.950 "rdma_max_cq_size": 0, 00:08:27.950 "rdma_srq_size": 0, 00:08:27.950 "reconnect_delay_sec": 0, 00:08:27.950 "timeout_admin_us": 0, 00:08:27.950 "timeout_us": 0, 00:08:27.950 "transport_ack_timeout": 0, 00:08:27.950 "transport_retry_count": 4, 00:08:27.950 "transport_tos": 0 00:08:27.950 } 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "method": "bdev_nvme_set_hotplug", 00:08:27.950 "params": { 00:08:27.950 "enable": false, 00:08:27.950 "period_us": 100000 00:08:27.950 } 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "method": "bdev_wait_for_examine" 00:08:27.950 } 00:08:27.950 ] 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "subsystem": "scsi", 00:08:27.950 "config": null 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "subsystem": "scheduler", 00:08:27.950 "config": [ 00:08:27.950 { 00:08:27.950 "method": "framework_set_scheduler", 00:08:27.950 "params": { 00:08:27.950 "name": "static" 00:08:27.950 } 00:08:27.950 } 00:08:27.950 ] 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "subsystem": "vhost_scsi", 00:08:27.950 "config": [] 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "subsystem": "vhost_blk", 00:08:27.950 "config": [] 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "subsystem": "ublk", 00:08:27.950 "config": [] 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "subsystem": "nbd", 00:08:27.950 "config": [] 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "subsystem": "nvmf", 00:08:27.950 "config": [ 00:08:27.950 { 00:08:27.950 "method": "nvmf_set_config", 00:08:27.950 "params": { 00:08:27.950 "admin_cmd_passthru": { 00:08:27.950 "identify_ctrlr": false 00:08:27.950 }, 00:08:27.950 "dhchap_dhgroups": [ 00:08:27.950 "null", 00:08:27.950 "ffdhe2048", 00:08:27.950 "ffdhe3072", 00:08:27.950 "ffdhe4096", 00:08:27.950 "ffdhe6144", 00:08:27.950 "ffdhe8192" 00:08:27.950 ], 00:08:27.950 "dhchap_digests": [ 00:08:27.950 "sha256", 00:08:27.950 "sha384", 00:08:27.950 "sha512" 00:08:27.950 ], 00:08:27.950 "discovery_filter": "match_any" 00:08:27.950 } 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "method": "nvmf_set_max_subsystems", 00:08:27.950 "params": { 00:08:27.950 "max_subsystems": 1024 00:08:27.950 } 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "method": "nvmf_set_crdt", 00:08:27.950 "params": { 00:08:27.950 "crdt1": 0, 00:08:27.950 "crdt2": 0, 00:08:27.950 "crdt3": 0 00:08:27.950 } 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "method": "nvmf_create_transport", 00:08:27.950 "params": { 00:08:27.950 "abort_timeout_sec": 1, 00:08:27.950 "ack_timeout": 0, 00:08:27.950 "buf_cache_size": 4294967295, 00:08:27.950 "c2h_success": true, 00:08:27.950 "data_wr_pool_size": 0, 00:08:27.950 "dif_insert_or_strip": false, 00:08:27.950 "in_capsule_data_size": 4096, 00:08:27.950 "io_unit_size": 131072, 00:08:27.950 "max_aq_depth": 128, 00:08:27.950 "max_io_qpairs_per_ctrlr": 127, 00:08:27.950 "max_io_size": 131072, 00:08:27.950 "max_queue_depth": 128, 00:08:27.950 "num_shared_buffers": 511, 00:08:27.950 "sock_priority": 0, 00:08:27.950 "trtype": "TCP", 00:08:27.950 "zcopy": false 00:08:27.950 } 00:08:27.950 } 00:08:27.950 ] 00:08:27.950 }, 00:08:27.950 { 00:08:27.950 "subsystem": "iscsi", 00:08:27.950 "config": [ 00:08:27.950 { 00:08:27.950 "method": "iscsi_set_options", 00:08:27.950 "params": { 00:08:27.950 "allow_duplicated_isid": false, 
00:08:27.950 "chap_group": 0, 00:08:27.950 "data_out_pool_size": 2048, 00:08:27.950 "default_time2retain": 20, 00:08:27.950 "default_time2wait": 2, 00:08:27.950 "disable_chap": false, 00:08:27.950 "error_recovery_level": 0, 00:08:27.950 "first_burst_length": 8192, 00:08:27.950 "immediate_data": true, 00:08:27.950 "immediate_data_pool_size": 16384, 00:08:27.950 "max_connections_per_session": 2, 00:08:27.950 "max_large_datain_per_connection": 64, 00:08:27.950 "max_queue_depth": 64, 00:08:27.950 "max_r2t_per_connection": 4, 00:08:27.950 "max_sessions": 128, 00:08:27.950 "mutual_chap": false, 00:08:27.950 "node_base": "iqn.2016-06.io.spdk", 00:08:27.950 "nop_in_interval": 30, 00:08:27.950 "nop_timeout": 60, 00:08:27.950 "pdu_pool_size": 36864, 00:08:27.950 "require_chap": false 00:08:27.950 } 00:08:27.950 } 00:08:27.950 ] 00:08:27.950 } 00:08:27.950 ] 00:08:27.950 } 00:08:27.950 13:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:27.950 13:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59092 00:08:27.950 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59092 ']' 00:08:27.950 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59092 00:08:27.950 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:27.950 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.950 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59092 00:08:28.209 killing process with pid 59092 00:08:28.209 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.209 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.209 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59092' 00:08:28.209 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59092 00:08:28.209 13:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59092 00:08:28.467 13:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59137 00:08:28.467 13:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:28.467 13:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:33.728 13:02:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59137 00:08:33.728 13:02:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59137 ']' 00:08:33.728 13:02:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59137 00:08:33.728 13:02:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:33.728 13:02:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.728 13:02:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59137 00:08:33.728 killing process with pid 59137 00:08:33.728 13:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.728 13:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.728 13:02:22 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59137' 00:08:33.728 13:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59137 00:08:33.728 13:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59137 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:33.986 00:08:33.986 real 0m7.128s 00:08:33.986 user 0m6.877s 00:08:33.986 sys 0m0.697s 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:33.986 ************************************ 00:08:33.986 END TEST skip_rpc_with_json 00:08:33.986 ************************************ 00:08:33.986 13:02:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:33.986 13:02:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.986 13:02:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.986 13:02:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.986 ************************************ 00:08:33.986 START TEST skip_rpc_with_delay 00:08:33.986 ************************************ 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:33.986 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:33.987 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:33.987 [2024-12-06 13:02:22.525243] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
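The spdk_app_start error above is the expected outcome of this test: spdk_tgt must refuse --wait-for-rpc once --no-rpc-server has disabled the RPC server entirely. A minimal stand-alone sketch of the same negative check, assuming the build/bin/spdk_tgt path shown in this log:

    # The flag combination below must fail; a zero exit status would mean
    # the guard in spdk_app_start is broken, so success is the error case.
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'ERROR: --wait-for-rpc accepted without an RPC server' >&2
        exit 1
    fi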
00:08:33.987 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:33.987 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.987 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.987 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.987 00:08:33.987 real 0m0.083s 00:08:33.987 user 0m0.050s 00:08:33.987 sys 0m0.033s 00:08:33.987 ************************************ 00:08:33.987 END TEST skip_rpc_with_delay 00:08:33.987 ************************************ 00:08:33.987 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.987 13:02:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:33.987 13:02:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:33.987 13:02:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:33.987 13:02:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:33.987 13:02:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.987 13:02:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.987 13:02:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.987 ************************************ 00:08:33.987 START TEST exit_on_failed_rpc_init 00:08:33.987 ************************************ 00:08:33.987 13:02:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:33.987 13:02:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59241 00:08:33.987 13:02:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:33.987 13:02:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59241 00:08:33.987 13:02:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59241 ']' 00:08:33.987 13:02:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.987 13:02:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.987 13:02:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.987 13:02:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.987 13:02:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:34.250 [2024-12-06 13:02:22.658951] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:08:34.250 [2024-12-06 13:02:22.659307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59241 ] 00:08:34.250 [2024-12-06 13:02:22.808311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.250 [2024-12-06 13:02:22.879425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:34.816 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:34.816 [2024-12-06 13:02:23.243905] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:08:34.816 [2024-12-06 13:02:23.244012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59263 ] 00:08:34.816 [2024-12-06 13:02:23.387138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.816 [2024-12-06 13:02:23.452000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.816 [2024-12-06 13:02:23.452104] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
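The rpc_listen failure above ("in use. Specify another.") is the behavior exit_on_failed_rpc_init exists to verify: a second spdk_tgt pointed at an RPC socket that pid 59241 already owns must fail initialization, as the remaining error lines below confirm. A rough sketch of the collision, assuming the default /var/tmp/spdk.sock path from this log:

    # First instance takes the default RPC socket.
    build/bin/spdk_tgt -m 0x1 &
    first=$!
    sleep 2    # crude settle; the harness uses waitforlisten instead

    # Second instance targets the same socket and must fail to start RPC.
    if build/bin/spdk_tgt -m 0x2; then
        echo 'ERROR: second target started despite the socket collision' >&2
    fi
    kill "$first"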
00:08:34.816 [2024-12-06 13:02:23.452121] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:34.816 [2024-12-06 13:02:23.452130] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59241 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59241 ']' 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59241 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59241 00:08:35.075 killing process with pid 59241 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59241' 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59241 00:08:35.075 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59241 00:08:35.335 00:08:35.335 real 0m1.355s 00:08:35.335 user 0m1.459s 00:08:35.335 sys 0m0.388s 00:08:35.335 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.335 ************************************ 00:08:35.335 13:02:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:35.335 END TEST exit_on_failed_rpc_init 00:08:35.335 ************************************ 00:08:35.594 13:02:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:35.594 00:08:35.594 real 0m14.401s 00:08:35.594 user 0m13.607s 00:08:35.594 sys 0m1.643s 00:08:35.594 13:02:23 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.594 13:02:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.594 ************************************ 00:08:35.594 END TEST skip_rpc 00:08:35.594 ************************************ 00:08:35.594 13:02:24 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:35.594 13:02:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.594 13:02:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.594 13:02:24 -- common/autotest_common.sh@10 -- # set +x 00:08:35.594 
************************************ 00:08:35.594 START TEST rpc_client 00:08:35.594 ************************************ 00:08:35.594 13:02:24 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:35.594 * Looking for test storage... 00:08:35.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:35.594 13:02:24 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:35.594 13:02:24 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:08:35.594 13:02:24 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:35.854 13:02:24 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.854 13:02:24 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:35.854 13:02:24 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.854 13:02:24 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:35.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.854 --rc genhtml_branch_coverage=1 00:08:35.854 --rc genhtml_function_coverage=1 00:08:35.854 --rc genhtml_legend=1 00:08:35.854 --rc geninfo_all_blocks=1 00:08:35.854 --rc geninfo_unexecuted_blocks=1 00:08:35.854 00:08:35.854 ' 00:08:35.854 13:02:24 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:35.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.854 --rc genhtml_branch_coverage=1 00:08:35.854 --rc genhtml_function_coverage=1 00:08:35.854 --rc genhtml_legend=1 00:08:35.854 --rc geninfo_all_blocks=1 00:08:35.854 --rc geninfo_unexecuted_blocks=1 00:08:35.854 00:08:35.854 ' 00:08:35.855 13:02:24 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:35.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.855 --rc genhtml_branch_coverage=1 00:08:35.855 --rc genhtml_function_coverage=1 00:08:35.855 --rc genhtml_legend=1 00:08:35.855 --rc geninfo_all_blocks=1 00:08:35.855 --rc geninfo_unexecuted_blocks=1 00:08:35.855 00:08:35.855 ' 00:08:35.855 13:02:24 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:35.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.855 --rc genhtml_branch_coverage=1 00:08:35.855 --rc genhtml_function_coverage=1 00:08:35.855 --rc genhtml_legend=1 00:08:35.855 --rc geninfo_all_blocks=1 00:08:35.855 --rc geninfo_unexecuted_blocks=1 00:08:35.855 00:08:35.855 ' 00:08:35.855 13:02:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:35.855 OK 00:08:35.855 13:02:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:35.855 00:08:35.855 real 0m0.239s 00:08:35.855 user 0m0.155s 00:08:35.855 sys 0m0.090s 00:08:35.855 13:02:24 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.855 13:02:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:35.855 ************************************ 00:08:35.855 END TEST rpc_client 00:08:35.855 ************************************ 00:08:35.855 13:02:24 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:35.855 13:02:24 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.855 13:02:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.855 13:02:24 -- common/autotest_common.sh@10 -- # set +x 00:08:35.855 ************************************ 00:08:35.855 START TEST json_config 00:08:35.855 ************************************ 00:08:35.855 13:02:24 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:35.855 13:02:24 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:35.855 13:02:24 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:08:35.855 13:02:24 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:35.855 13:02:24 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:35.855 13:02:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.855 13:02:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.115 13:02:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.115 13:02:24 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.115 13:02:24 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.115 13:02:24 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.115 13:02:24 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.115 13:02:24 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.116 13:02:24 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.116 13:02:24 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.116 13:02:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.116 13:02:24 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:36.116 13:02:24 json_config -- scripts/common.sh@345 -- # : 1 00:08:36.116 13:02:24 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.116 13:02:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.116 13:02:24 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:36.116 13:02:24 json_config -- scripts/common.sh@353 -- # local d=1 00:08:36.116 13:02:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.116 13:02:24 json_config -- scripts/common.sh@355 -- # echo 1 00:08:36.116 13:02:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.116 13:02:24 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:36.116 13:02:24 json_config -- scripts/common.sh@353 -- # local d=2 00:08:36.116 13:02:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.116 13:02:24 json_config -- scripts/common.sh@355 -- # echo 2 00:08:36.116 13:02:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.116 13:02:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.116 13:02:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.116 13:02:24 json_config -- scripts/common.sh@368 -- # return 0 00:08:36.116 13:02:24 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.116 13:02:24 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.116 --rc genhtml_branch_coverage=1 00:08:36.116 --rc genhtml_function_coverage=1 00:08:36.116 --rc genhtml_legend=1 00:08:36.116 --rc geninfo_all_blocks=1 00:08:36.116 --rc geninfo_unexecuted_blocks=1 00:08:36.116 00:08:36.116 ' 00:08:36.116 13:02:24 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.116 --rc genhtml_branch_coverage=1 00:08:36.116 --rc genhtml_function_coverage=1 00:08:36.116 --rc genhtml_legend=1 00:08:36.116 --rc geninfo_all_blocks=1 00:08:36.116 --rc geninfo_unexecuted_blocks=1 00:08:36.116 00:08:36.116 ' 00:08:36.116 13:02:24 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:36.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.116 --rc genhtml_branch_coverage=1 00:08:36.116 --rc genhtml_function_coverage=1 00:08:36.116 --rc genhtml_legend=1 00:08:36.116 --rc geninfo_all_blocks=1 00:08:36.116 --rc geninfo_unexecuted_blocks=1 00:08:36.116 00:08:36.116 ' 00:08:36.116 13:02:24 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.116 --rc genhtml_branch_coverage=1 00:08:36.116 --rc genhtml_function_coverage=1 00:08:36.116 --rc genhtml_legend=1 00:08:36.116 --rc geninfo_all_blocks=1 00:08:36.116 --rc geninfo_unexecuted_blocks=1 00:08:36.116 00:08:36.116 ' 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.116 13:02:24 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.116 13:02:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.116 13:02:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.116 13:02:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.116 13:02:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.116 13:02:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.116 13:02:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.116 13:02:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.116 13:02:24 json_config -- paths/export.sh@5 -- # export PATH 00:08:36.116 13:02:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@51 -- # : 0 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.116 13:02:24 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.116 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.116 13:02:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:36.116 INFO: JSON configuration test init 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:36.116 13:02:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.116 13:02:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:36.116 13:02:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.116 13:02:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:36.116 13:02:24 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:36.117 13:02:24 json_config -- json_config/common.sh@9 -- # local app=target 00:08:36.117 13:02:24 json_config -- json_config/common.sh@10 -- # shift 
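The "line 33: [: : integer expression expected" message above is produced by nvmf/common.sh evaluating '[' '' -eq 1 ']': the numeric -eq operator requires an integer on both sides, and the empty string supplies none. A two-line illustration of the pitfall and a guarded form, independent of this harness:

    flag=""
    [ "$flag" -eq 1 ]          # -> "[: : integer expression expected" (exit 2)
    [ "${flag:-0}" -eq 1 ]     # defaulting the variable keeps the test numeric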
00:08:36.117 13:02:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:36.117 13:02:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:36.117 13:02:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:36.117 13:02:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:36.117 13:02:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:36.117 13:02:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59397 00:08:36.117 13:02:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:36.117 Waiting for target to run... 00:08:36.117 13:02:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:36.117 13:02:24 json_config -- json_config/common.sh@25 -- # waitforlisten 59397 /var/tmp/spdk_tgt.sock 00:08:36.117 13:02:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 59397 ']' 00:08:36.117 13:02:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:36.117 13:02:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.117 13:02:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:36.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:36.117 13:02:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.117 13:02:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:36.117 [2024-12-06 13:02:24.613173] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:08:36.117 [2024-12-06 13:02:24.613274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59397 ] 00:08:36.689 [2024-12-06 13:02:25.026288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.689 [2024-12-06 13:02:25.076480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.259 00:08:37.259 13:02:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.259 13:02:25 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:37.259 13:02:25 json_config -- json_config/common.sh@26 -- # echo '' 00:08:37.259 13:02:25 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:37.259 13:02:25 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:37.259 13:02:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.259 13:02:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:37.259 13:02:25 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:37.259 13:02:25 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:37.259 13:02:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.259 13:02:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:37.259 13:02:25 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:37.259 13:02:25 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:37.259 13:02:25 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:37.826 13:02:26 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:37.826 13:02:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:37.826 13:02:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.826 13:02:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:37.826 13:02:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:37.826 13:02:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:37.826 13:02:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:37.826 13:02:26 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:37.826 13:02:26 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:37.826 13:02:26 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:37.826 13:02:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:37.826 13:02:26 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@54 -- # sort 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:38.086 13:02:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.086 13:02:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:38.086 13:02:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.086 13:02:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:38.086 13:02:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:38.086 13:02:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:38.344 MallocForNvmf0 00:08:38.344 13:02:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:38.344 13:02:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:38.910 MallocForNvmf1 00:08:38.910 13:02:27 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:38.910 13:02:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:38.910 [2024-12-06 13:02:27.541628] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.168 13:02:27 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:39.168 13:02:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:39.427 13:02:28 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:39.427 13:02:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:39.995 13:02:28 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:39.995 13:02:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:40.254 13:02:28 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:40.254 13:02:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:40.513 [2024-12-06 13:02:28.950510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:40.513 13:02:28 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:40.513 13:02:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.513 13:02:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:40.513 13:02:29 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:40.513 13:02:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.513 13:02:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:40.513 13:02:29 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:40.513 13:02:29 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:08:40.513 13:02:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:40.772 MallocBdevForConfigChangeCheck 00:08:40.772 13:02:29 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:40.772 13:02:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.772 13:02:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:41.030 13:02:29 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:41.030 13:02:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:41.289 INFO: shutting down applications... 00:08:41.289 13:02:29 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:08:41.289 13:02:29 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:41.289 13:02:29 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:41.289 13:02:29 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:41.289 13:02:29 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:41.556 Calling clear_iscsi_subsystem 00:08:41.556 Calling clear_nvmf_subsystem 00:08:41.556 Calling clear_nbd_subsystem 00:08:41.556 Calling clear_ublk_subsystem 00:08:41.556 Calling clear_vhost_blk_subsystem 00:08:41.556 Calling clear_vhost_scsi_subsystem 00:08:41.556 Calling clear_bdev_subsystem 00:08:41.826 13:02:30 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:41.826 13:02:30 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:41.826 13:02:30 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:41.826 13:02:30 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:41.826 13:02:30 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:41.826 13:02:30 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:42.085 13:02:30 json_config -- json_config/json_config.sh@352 -- # break 00:08:42.085 13:02:30 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:42.085 13:02:30 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:08:42.085 13:02:30 json_config -- json_config/common.sh@31 -- # local app=target 00:08:42.085 13:02:30 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:42.085 13:02:30 json_config -- json_config/common.sh@35 -- # [[ -n 59397 ]] 00:08:42.085 13:02:30 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59397 00:08:42.085 13:02:30 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:42.085 13:02:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:42.085 13:02:30 json_config -- json_config/common.sh@41 -- # kill -0 59397 00:08:42.085 13:02:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:42.651 13:02:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:42.651 13:02:31 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:08:42.651 13:02:31 json_config -- json_config/common.sh@41 -- # kill -0 59397 00:08:42.651 13:02:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:42.651 13:02:31 json_config -- json_config/common.sh@43 -- # break 00:08:42.651 13:02:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:42.651 13:02:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:42.651 SPDK target shutdown done 00:08:42.651 13:02:31 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:42.651 INFO: relaunching applications... 00:08:42.651 13:02:31 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:42.651 13:02:31 json_config -- json_config/common.sh@9 -- # local app=target 00:08:42.651 13:02:31 json_config -- json_config/common.sh@10 -- # shift 00:08:42.651 13:02:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:42.651 13:02:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:42.651 13:02:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:42.651 13:02:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:42.651 13:02:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:42.651 13:02:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59687 00:08:42.651 13:02:31 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:42.651 Waiting for target to run... 00:08:42.651 13:02:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:42.651 13:02:31 json_config -- json_config/common.sh@25 -- # waitforlisten 59687 /var/tmp/spdk_tgt.sock 00:08:42.651 13:02:31 json_config -- common/autotest_common.sh@835 -- # '[' -z 59687 ']' 00:08:42.651 13:02:31 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:42.651 13:02:31 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:42.651 13:02:31 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:42.651 13:02:31 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.651 13:02:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.651 [2024-12-06 13:02:31.268961] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:08:42.651 [2024-12-06 13:02:31.269043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59687 ] 00:08:43.217 [2024-12-06 13:02:31.711337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.217 [2024-12-06 13:02:31.766014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.782 [2024-12-06 13:02:32.117272] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.782 [2024-12-06 13:02:32.149355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:43.782 13:02:32 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.782 13:02:32 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:43.782 00:08:43.782 13:02:32 json_config -- json_config/common.sh@26 -- # echo '' 00:08:43.782 13:02:32 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:43.782 INFO: Checking if target configuration is the same... 00:08:43.782 13:02:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:43.782 13:02:32 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:43.782 13:02:32 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:43.782 13:02:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:43.782 + '[' 2 -ne 2 ']' 00:08:43.782 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:43.782 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:43.782 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:43.782 +++ basename /dev/fd/62 00:08:43.782 ++ mktemp /tmp/62.XXX 00:08:43.782 + tmp_file_1=/tmp/62.SA7 00:08:43.782 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:43.782 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:43.782 + tmp_file_2=/tmp/spdk_tgt_config.json.1cN 00:08:43.782 + ret=0 00:08:43.782 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:44.347 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:44.347 + diff -u /tmp/62.SA7 /tmp/spdk_tgt_config.json.1cN 00:08:44.347 + echo 'INFO: JSON config files are the same' 00:08:44.347 INFO: JSON config files are the same 00:08:44.347 + rm /tmp/62.SA7 /tmp/spdk_tgt_config.json.1cN 00:08:44.347 + exit 0 00:08:44.347 13:02:32 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:44.347 INFO: changing configuration and checking if this can be detected... 00:08:44.347 13:02:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
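Note: the json_diff.sh run above is what prints 'INFO: JSON config files are the same'. Its core is a sort-then-diff; roughly as below, where the stdin/stdout plumbing is an assumption since xtrace does not show redirections:

    # Dump the live config, normalize key order on both sides, then diff.
    live_sorted=$(mktemp /tmp/62.XXX)
    disk_sorted=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > "$live_sorted"
    config_filter.py -method sort < spdk_tgt_config.json > "$disk_sorted"
    diff -u "$live_sorted" "$disk_sorted" && echo 'INFO: JSON config files are the same'
    rm "$live_sorted" "$disk_sorted"

Sorting both sides first means only semantic differences survive into the diff, not RPC ordering.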
00:08:44.347 13:02:32 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:44.347 13:02:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:44.603 13:02:33 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:44.603 13:02:33 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:44.603 13:02:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:44.603 + '[' 2 -ne 2 ']' 00:08:44.604 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:44.604 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:44.604 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:44.604 +++ basename /dev/fd/62 00:08:44.604 ++ mktemp /tmp/62.XXX 00:08:44.604 + tmp_file_1=/tmp/62.VQ6 00:08:44.604 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:44.604 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:44.604 + tmp_file_2=/tmp/spdk_tgt_config.json.KGB 00:08:44.604 + ret=0 00:08:44.604 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:45.169 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:45.169 + diff -u /tmp/62.VQ6 /tmp/spdk_tgt_config.json.KGB 00:08:45.169 + ret=1 00:08:45.169 + echo '=== Start of file: /tmp/62.VQ6 ===' 00:08:45.169 + cat /tmp/62.VQ6 00:08:45.169 + echo '=== End of file: /tmp/62.VQ6 ===' 00:08:45.169 + echo '' 00:08:45.169 + echo '=== Start of file: /tmp/spdk_tgt_config.json.KGB ===' 00:08:45.169 + cat /tmp/spdk_tgt_config.json.KGB 00:08:45.169 + echo '=== End of file: /tmp/spdk_tgt_config.json.KGB ===' 00:08:45.169 + echo '' 00:08:45.169 + rm /tmp/62.VQ6 /tmp/spdk_tgt_config.json.KGB 00:08:45.169 + exit 1 00:08:45.169 INFO: configuration change detected. 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
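Note: the second json_diff.sh pass above is expected to fail, since deleting MallocBdevForConfigChangeCheck is the deliberate change. The assertion is effectively inverted; in this sketch, config_same is a placeholder for the sort-and-diff comparison shown earlier:

    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if config_same; then                        # placeholder for the sort+diff above
        echo 'ERROR: intentional bdev deletion was not detected' >&2
        exit 1
    fi
    echo 'INFO: configuration change detected.'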
00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@324 -- # [[ -n 59687 ]] 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.169 13:02:33 json_config -- json_config/json_config.sh@330 -- # killprocess 59687 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@954 -- # '[' -z 59687 ']' 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@958 -- # kill -0 59687 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@959 -- # uname 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59687 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.169 killing process with pid 59687 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59687' 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@973 -- # kill 59687 00:08:45.169 13:02:33 json_config -- common/autotest_common.sh@978 -- # wait 59687 00:08:45.426 13:02:33 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:45.426 13:02:33 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:45.426 13:02:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.426 13:02:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.426 13:02:34 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:45.426 INFO: Success 00:08:45.426 13:02:34 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:45.426 00:08:45.426 real 0m9.674s 00:08:45.426 user 0m14.278s 00:08:45.426 sys 0m1.981s 00:08:45.426 
13:02:34 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.426 13:02:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.426 ************************************ 00:08:45.426 END TEST json_config 00:08:45.426 ************************************ 00:08:45.426 13:02:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:45.426 13:02:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.426 13:02:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.426 13:02:34 -- common/autotest_common.sh@10 -- # set +x 00:08:45.685 ************************************ 00:08:45.685 START TEST json_config_extra_key 00:08:45.685 ************************************ 00:08:45.685 13:02:34 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:45.685 13:02:34 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:45.685 13:02:34 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:08:45.685 13:02:34 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:45.685 13:02:34 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:45.685 13:02:34 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.685 13:02:34 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:45.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.685 --rc genhtml_branch_coverage=1 00:08:45.685 --rc genhtml_function_coverage=1 00:08:45.685 --rc genhtml_legend=1 00:08:45.685 --rc geninfo_all_blocks=1 00:08:45.685 --rc geninfo_unexecuted_blocks=1 00:08:45.685 00:08:45.685 ' 00:08:45.685 13:02:34 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:45.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.685 --rc genhtml_branch_coverage=1 00:08:45.685 --rc genhtml_function_coverage=1 00:08:45.685 --rc genhtml_legend=1 00:08:45.685 --rc geninfo_all_blocks=1 00:08:45.685 --rc geninfo_unexecuted_blocks=1 00:08:45.685 00:08:45.685 ' 00:08:45.685 13:02:34 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:45.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.685 --rc genhtml_branch_coverage=1 00:08:45.685 --rc genhtml_function_coverage=1 00:08:45.685 --rc genhtml_legend=1 00:08:45.685 --rc geninfo_all_blocks=1 00:08:45.685 --rc geninfo_unexecuted_blocks=1 00:08:45.685 00:08:45.685 ' 00:08:45.685 13:02:34 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:45.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.685 --rc genhtml_branch_coverage=1 00:08:45.685 --rc genhtml_function_coverage=1 00:08:45.685 --rc genhtml_legend=1 00:08:45.685 --rc geninfo_all_blocks=1 00:08:45.685 --rc geninfo_unexecuted_blocks=1 00:08:45.685 00:08:45.685 ' 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.685 13:02:34 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.685 13:02:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.685 13:02:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.685 13:02:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.685 13:02:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.685 13:02:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:45.685 13:02:34 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.685 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.685 13:02:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:45.685 INFO: launching applications... 00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
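Note: the lt 1.15 2 / cmp_versions trace that opens this test, and recurs in the alias_rpc and dpdk_mem_utility sections below, is scripts/common.sh deciding whether the installed lcov predates 2.x before exporting the LCOV_OPTS branch-coverage flags. A condensed sketch of that field-wise comparison; the function name is illustrative:

    # True (exit 0) when version $1 sorts before $2, splitting on '.', '-' and ':'.
    version_lt() {
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal versions are not 'less than'
    }
    version_lt 1.15 2 && echo 'old lcov: enable the branch/function coverage flags'

Missing fields default to 0, so shorter versions such as 2 compare field by field against 2.x rather than by string length.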
00:08:45.685 13:02:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:45.685 13:02:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:45.685 13:02:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:45.685 13:02:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:45.686 13:02:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:45.686 13:02:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:45.686 13:02:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:45.686 13:02:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:45.686 13:02:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59871 00:08:45.686 Waiting for target to run... 00:08:45.686 13:02:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:45.686 13:02:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59871 /var/tmp/spdk_tgt.sock 00:08:45.686 13:02:34 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59871 ']' 00:08:45.686 13:02:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:45.686 13:02:34 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:45.686 13:02:34 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:45.686 13:02:34 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:45.686 13:02:34 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.686 13:02:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:45.943 [2024-12-06 13:02:34.334089] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:08:45.943 [2024-12-06 13:02:34.334189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59871 ] 00:08:46.511 [2024-12-06 13:02:34.843789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.511 [2024-12-06 13:02:34.897151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.768 13:02:35 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.768 13:02:35 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:46.768 00:08:46.768 13:02:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:46.768 INFO: shutting down applications... 00:08:46.768 13:02:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
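Note: the shutdown that follows is the same pattern json_config used earlier: one SIGINT, then a bounded kill -0 poll so a wedged target cannot hang the run. A sketch of that loop, with an illustrative helper name:

    # SIGINT the app, then give it up to 30 * 0.5s to exit before declaring failure.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || {
                echo 'SPDK target shutdown done'
                return 0
            }
            sleep 0.5
        done
        return 1                                 # still running after ~15s
    }
    shutdown_app 59871

kill -0 delivers no signal; it only tests whether the PID is still alive.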
00:08:46.768 13:02:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:46.768 13:02:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:46.768 13:02:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:46.768 13:02:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59871 ]] 00:08:46.768 13:02:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59871 00:08:46.768 13:02:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:46.768 13:02:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:46.768 13:02:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59871 00:08:46.768 13:02:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:47.359 13:02:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:47.359 13:02:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:47.359 13:02:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59871 00:08:47.359 13:02:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:47.359 13:02:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:47.359 13:02:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:47.359 SPDK target shutdown done 00:08:47.359 13:02:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:47.359 Success 00:08:47.359 13:02:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:47.359 00:08:47.359 real 0m1.837s 00:08:47.359 user 0m1.691s 00:08:47.359 sys 0m0.566s 00:08:47.359 13:02:35 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.359 13:02:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:47.359 ************************************ 00:08:47.359 END TEST json_config_extra_key 00:08:47.359 ************************************ 00:08:47.359 13:02:35 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:47.359 13:02:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.359 13:02:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.359 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:08:47.359 ************************************ 00:08:47.359 START TEST alias_rpc 00:08:47.359 ************************************ 00:08:47.359 13:02:35 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:47.618 * Looking for test storage... 
00:08:47.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.618 13:02:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.618 --rc genhtml_branch_coverage=1 00:08:47.618 --rc genhtml_function_coverage=1 00:08:47.618 --rc genhtml_legend=1 00:08:47.618 --rc geninfo_all_blocks=1 00:08:47.618 --rc geninfo_unexecuted_blocks=1 00:08:47.618 00:08:47.618 ' 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.618 --rc genhtml_branch_coverage=1 00:08:47.618 --rc genhtml_function_coverage=1 00:08:47.618 --rc genhtml_legend=1 00:08:47.618 --rc geninfo_all_blocks=1 00:08:47.618 --rc geninfo_unexecuted_blocks=1 00:08:47.618 00:08:47.618 ' 00:08:47.618 13:02:36 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.618 --rc genhtml_branch_coverage=1 00:08:47.618 --rc genhtml_function_coverage=1 00:08:47.618 --rc genhtml_legend=1 00:08:47.618 --rc geninfo_all_blocks=1 00:08:47.618 --rc geninfo_unexecuted_blocks=1 00:08:47.618 00:08:47.618 ' 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.618 --rc genhtml_branch_coverage=1 00:08:47.618 --rc genhtml_function_coverage=1 00:08:47.618 --rc genhtml_legend=1 00:08:47.618 --rc geninfo_all_blocks=1 00:08:47.618 --rc geninfo_unexecuted_blocks=1 00:08:47.618 00:08:47.618 ' 00:08:47.618 13:02:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:47.618 13:02:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59956 00:08:47.618 13:02:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59956 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59956 ']' 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.618 13:02:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.618 13:02:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 [2024-12-06 13:02:36.222052] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
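Note: the killprocess teardown traced below does not blindly signal the PID: it first confirms the process is alive and, on Linux, inspects the command name before killing and reaping it. A sketch of that flow; the real helper special-cases a sudo wrapper differently than shown here:

    # Kill $pid only after confirming it is alive and is the expected process.
    killprocess_sketch() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 1              # already gone
        if [ "$(uname)" = Linux ]; then
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1                  # sketch: never signal the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap the child; propagates exit status
    }
    killprocess_sketch 59956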
00:08:47.618 [2024-12-06 13:02:36.222186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59956 ] 00:08:47.876 [2024-12-06 13:02:36.375469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.876 [2024-12-06 13:02:36.444862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.810 13:02:37 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.810 13:02:37 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:48.810 13:02:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:49.068 13:02:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59956 00:08:49.068 13:02:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59956 ']' 00:08:49.068 13:02:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59956 00:08:49.068 13:02:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:49.068 13:02:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.068 13:02:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59956 00:08:49.068 13:02:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.068 killing process with pid 59956 00:08:49.068 13:02:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.068 13:02:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59956' 00:08:49.068 13:02:37 alias_rpc -- common/autotest_common.sh@973 -- # kill 59956 00:08:49.068 13:02:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 59956 00:08:49.636 00:08:49.636 real 0m2.072s 00:08:49.636 user 0m2.403s 00:08:49.636 sys 0m0.518s 00:08:49.636 13:02:38 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.636 13:02:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.636 ************************************ 00:08:49.636 END TEST alias_rpc 00:08:49.636 ************************************ 00:08:49.636 13:02:38 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:08:49.636 13:02:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:49.636 13:02:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.636 13:02:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.636 13:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:49.636 ************************************ 00:08:49.636 START TEST dpdk_mem_utility 00:08:49.636 ************************************ 00:08:49.636 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:49.636 * Looking for test storage... 
00:08:49.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:49.636 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.636 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.636 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.896 13:02:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.896 --rc genhtml_branch_coverage=1 00:08:49.896 --rc genhtml_function_coverage=1 00:08:49.896 --rc genhtml_legend=1 00:08:49.896 --rc geninfo_all_blocks=1 00:08:49.896 --rc geninfo_unexecuted_blocks=1 00:08:49.896 00:08:49.896 ' 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.896 --rc 
genhtml_branch_coverage=1 00:08:49.896 --rc genhtml_function_coverage=1 00:08:49.896 --rc genhtml_legend=1 00:08:49.896 --rc geninfo_all_blocks=1 00:08:49.896 --rc geninfo_unexecuted_blocks=1 00:08:49.896 00:08:49.896 ' 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.896 --rc genhtml_branch_coverage=1 00:08:49.896 --rc genhtml_function_coverage=1 00:08:49.896 --rc genhtml_legend=1 00:08:49.896 --rc geninfo_all_blocks=1 00:08:49.896 --rc geninfo_unexecuted_blocks=1 00:08:49.896 00:08:49.896 ' 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.896 --rc genhtml_branch_coverage=1 00:08:49.896 --rc genhtml_function_coverage=1 00:08:49.896 --rc genhtml_legend=1 00:08:49.896 --rc geninfo_all_blocks=1 00:08:49.896 --rc geninfo_unexecuted_blocks=1 00:08:49.896 00:08:49.896 ' 00:08:49.896 13:02:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:49.896 13:02:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60056 00:08:49.896 13:02:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60056 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60056 ']' 00:08:49.896 13:02:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.896 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:49.896 [2024-12-06 13:02:38.365344] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
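Note: what follows is the point of this test: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK heap state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders the heap, mempool, and memzone report dumped below. The round-trip, reduced to a sketch:

    # Ask the running target to dump DPDK memory stats, then pretty-print them.
    rpc.py env_dpdk_get_mem_stats      # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    scripts/dpdk_mem_info.py           # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0      # per-element detail for heap id 0

The -m 0 invocation is what produces the long element-by-element listing that makes up the rest of this log.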
00:08:49.896 [2024-12-06 13:02:38.365474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:08:49.896 [2024-12-06 13:02:38.517584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.154 [2024-12-06 13:02:38.581904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.414 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.414 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:50.414 13:02:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:50.414 13:02:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:50.414 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.414 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:50.414 { 00:08:50.414 "filename": "/tmp/spdk_mem_dump.txt" 00:08:50.414 } 00:08:50.414 13:02:38 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.414 13:02:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:50.414 DPDK memory size 818.000000 MiB in 1 heap(s) 00:08:50.414 1 heaps totaling size 818.000000 MiB 00:08:50.414 size: 818.000000 MiB heap id: 0 00:08:50.414 end heaps---------- 00:08:50.414 9 mempools totaling size 603.782043 MiB 00:08:50.414 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:50.414 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:50.414 size: 100.555481 MiB name: bdev_io_60056 00:08:50.414 size: 50.003479 MiB name: msgpool_60056 00:08:50.414 size: 36.509338 MiB name: fsdev_io_60056 00:08:50.414 size: 21.763794 MiB name: PDU_Pool 00:08:50.414 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:50.414 size: 4.133484 MiB name: evtpool_60056 00:08:50.414 size: 0.026123 MiB name: Session_Pool 00:08:50.414 end mempools------- 00:08:50.414 6 memzones totaling size 4.142822 MiB 00:08:50.414 size: 1.000366 MiB name: RG_ring_0_60056 00:08:50.414 size: 1.000366 MiB name: RG_ring_1_60056 00:08:50.414 size: 1.000366 MiB name: RG_ring_4_60056 00:08:50.414 size: 1.000366 MiB name: RG_ring_5_60056 00:08:50.414 size: 0.125366 MiB name: RG_ring_2_60056 00:08:50.414 size: 0.015991 MiB name: RG_ring_3_60056 00:08:50.414 end memzones------- 00:08:50.414 13:02:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:50.414 heap id: 0 total size: 818.000000 MiB number of busy elements: 234 number of free elements: 15 00:08:50.414 list of free elements. 
size: 10.817688 MiB 00:08:50.414 element at address: 0x200019200000 with size: 0.999878 MiB 00:08:50.414 element at address: 0x200019400000 with size: 0.999878 MiB 00:08:50.414 element at address: 0x200000400000 with size: 0.996338 MiB 00:08:50.414 element at address: 0x200032000000 with size: 0.994446 MiB 00:08:50.414 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:50.414 element at address: 0x200012c00000 with size: 0.944275 MiB 00:08:50.414 element at address: 0x200019600000 with size: 0.936584 MiB 00:08:50.414 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:50.414 element at address: 0x20001ae00000 with size: 0.571899 MiB 00:08:50.414 element at address: 0x200000c00000 with size: 0.490662 MiB 00:08:50.414 element at address: 0x20000a600000 with size: 0.489441 MiB 00:08:50.414 element at address: 0x200019800000 with size: 0.485657 MiB 00:08:50.414 element at address: 0x200003e00000 with size: 0.481018 MiB 00:08:50.414 element at address: 0x200028200000 with size: 0.397034 MiB 00:08:50.414 element at address: 0x200000800000 with size: 0.353394 MiB 00:08:50.414 list of standard malloc elements. size: 199.253418 MiB 00:08:50.414 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:50.414 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:50.414 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:50.414 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:08:50.414 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:08:50.414 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:50.414 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:08:50.414 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:50.414 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:08:50.414 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000085a780 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000085a980 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:08:50.414 element at address: 0x20000087f080 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000087f140 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000087f200 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000087f380 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000087f440 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000087f500 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:50.414 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:50.414 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:50.414 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:08:50.414 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:08:50.414 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:08:50.414 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:08:50.414 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:08:50.414 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:08:50.414 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:08:50.414 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:50.415 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:50.415 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:50.415 element at 
address: 0x20000a67d4c0 with size: 0.000183 MiB 00:08:50.415 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:08:50.415 [... well over a hundred further entries of the same form elided: 0.000183 MiB elements across the 0x20000a67xxxx, 0x20000a6fxxxx, 0x200012cfxxxx, 0x2000196exxxx, 0x2000198bxxxx, 0x20001ae9xxxx and 0x20002826xxxx address ranges ...] 00:08:50.416 element at
address: 0x20002826fcc0 with size: 0.000183 MiB 00:08:50.416 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:08:50.416 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:08:50.416 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:08:50.416 list of memzone associated elements. size: 607.928894 MiB 00:08:50.416 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:08:50.416 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:50.416 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:08:50.416 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:50.416 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:08:50.416 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60056_0 00:08:50.416 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:50.416 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60056_0 00:08:50.416 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:50.416 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60056_0 00:08:50.416 element at address: 0x2000199be940 with size: 20.255554 MiB 00:08:50.416 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:50.416 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:08:50.416 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:50.416 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:50.416 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60056_0 00:08:50.416 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:50.416 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60056 00:08:50.416 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:50.416 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60056 00:08:50.416 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:50.416 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:50.416 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:08:50.416 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:50.416 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:50.416 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:50.416 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:50.416 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:50.416 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:50.416 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60056 00:08:50.417 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:50.417 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60056 00:08:50.417 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:08:50.417 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60056 00:08:50.417 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:08:50.417 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60056 00:08:50.417 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:50.417 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60056 00:08:50.417 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:50.417 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60056 00:08:50.417 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:50.417 associated memzone info: size: 
0.500366 MiB name: RG_MP_PDU_Pool 00:08:50.417 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:50.417 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:50.417 element at address: 0x20001987c540 with size: 0.250488 MiB 00:08:50.417 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:50.417 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:50.417 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60056 00:08:50.417 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:08:50.417 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60056 00:08:50.417 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:08:50.417 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:50.417 element at address: 0x200028265bc0 with size: 0.023743 MiB 00:08:50.417 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:50.417 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:08:50.417 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60056 00:08:50.417 element at address: 0x20002826bd00 with size: 0.002441 MiB 00:08:50.417 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:50.417 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:50.417 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60056 00:08:50.417 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:50.417 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60056 00:08:50.417 element at address: 0x20000085a840 with size: 0.000305 MiB 00:08:50.417 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60056 00:08:50.417 element at address: 0x20002826c7c0 with size: 0.000305 MiB 00:08:50.417 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:50.417 13:02:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:50.417 13:02:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60056 00:08:50.417 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60056 ']' 00:08:50.417 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60056 00:08:50.417 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:50.417 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.417 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60056 00:08:50.676 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.676 killing process with pid 60056 00:08:50.676 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.676 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60056' 00:08:50.676 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60056 00:08:50.676 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60056 00:08:50.934 00:08:50.934 real 0m1.354s 00:08:50.934 user 0m1.330s 00:08:50.934 sys 0m0.443s 00:08:50.934 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.934 ************************************ 00:08:50.934 END TEST dpdk_mem_utility 00:08:50.934 ************************************ 00:08:50.934 13:02:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:50.934 
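For reference, the killprocess helper whose xtrace appears above boils down to: validate the pid, check the target is not a sudo wrapper, then SIGTERM and reap it. A minimal sketch reconstructed from the traced commands only; the real autotest_common.sh body may handle cases this trace never exercises:

```bash
# Sketch of killprocess as traced above; reconstructed from the xtrace,
# so the upstream helper may differ in detail.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # '[' -z 60056 ']' in the trace
    kill -0 "$pid" 2>/dev/null || return 0  # assumption: nothing to do if already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # The trace compares against "sudo" (here it saw reactor_0);
        # assumption: refuse to SIGTERM a sudo wrapper directly.
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # only reaps if $pid is a child of this shell, as in the tests
}
```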
13:02:39 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:50.934 13:02:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.934 13:02:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.934 13:02:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.934 ************************************ 00:08:50.934 START TEST event 00:08:50.934 ************************************ 00:08:50.934 13:02:39 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:50.934 * Looking for test storage... 00:08:51.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:51.193 13:02:39 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:51.193 13:02:39 event -- common/autotest_common.sh@1711 -- # lcov --version 00:08:51.193 13:02:39 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:51.193 13:02:39 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:51.193 13:02:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.193 13:02:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.193 13:02:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.193 13:02:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.193 13:02:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.193 13:02:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.193 13:02:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.193 13:02:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.193 13:02:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.193 13:02:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.193 13:02:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.193 13:02:39 event -- scripts/common.sh@344 -- # case "$op" in 00:08:51.193 13:02:39 event -- scripts/common.sh@345 -- # : 1 00:08:51.193 13:02:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.193 13:02:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.193 13:02:39 event -- scripts/common.sh@365 -- # decimal 1 00:08:51.193 13:02:39 event -- scripts/common.sh@353 -- # local d=1 00:08:51.193 13:02:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.193 13:02:39 event -- scripts/common.sh@355 -- # echo 1 00:08:51.193 13:02:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.193 13:02:39 event -- scripts/common.sh@366 -- # decimal 2 00:08:51.193 13:02:39 event -- scripts/common.sh@353 -- # local d=2 00:08:51.193 13:02:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.193 13:02:39 event -- scripts/common.sh@355 -- # echo 2 00:08:51.193 13:02:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.193 13:02:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.193 13:02:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.193 13:02:39 event -- scripts/common.sh@368 -- # return 0 00:08:51.193 13:02:39 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.193 13:02:39 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:51.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.193 --rc genhtml_branch_coverage=1 00:08:51.193 --rc genhtml_function_coverage=1 00:08:51.193 --rc genhtml_legend=1 00:08:51.193 --rc geninfo_all_blocks=1 00:08:51.193 --rc geninfo_unexecuted_blocks=1 00:08:51.193 00:08:51.193 ' 00:08:51.193 13:02:39 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:51.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.193 --rc genhtml_branch_coverage=1 00:08:51.193 --rc genhtml_function_coverage=1 00:08:51.193 --rc genhtml_legend=1 00:08:51.193 --rc geninfo_all_blocks=1 00:08:51.193 --rc geninfo_unexecuted_blocks=1 00:08:51.193 00:08:51.193 ' 00:08:51.193 13:02:39 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:51.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.193 --rc genhtml_branch_coverage=1 00:08:51.193 --rc genhtml_function_coverage=1 00:08:51.193 --rc genhtml_legend=1 00:08:51.193 --rc geninfo_all_blocks=1 00:08:51.193 --rc geninfo_unexecuted_blocks=1 00:08:51.193 00:08:51.193 ' 00:08:51.193 13:02:39 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:51.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.194 --rc genhtml_branch_coverage=1 00:08:51.194 --rc genhtml_function_coverage=1 00:08:51.194 --rc genhtml_legend=1 00:08:51.194 --rc geninfo_all_blocks=1 00:08:51.194 --rc geninfo_unexecuted_blocks=1 00:08:51.194 00:08:51.194 ' 00:08:51.194 13:02:39 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:51.194 13:02:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:51.194 13:02:39 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:51.194 13:02:39 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:51.194 13:02:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.194 13:02:39 event -- common/autotest_common.sh@10 -- # set +x 00:08:51.194 ************************************ 00:08:51.194 START TEST event_perf 00:08:51.194 ************************************ 00:08:51.194 13:02:39 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:51.194 Running I/O for 1 seconds...[2024-12-06 
13:02:39.714195] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:08:51.194 [2024-12-06 13:02:39.714285] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60151 ] 00:08:51.452 [2024-12-06 13:02:39.867226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.452 [2024-12-06 13:02:39.936836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.452 [2024-12-06 13:02:39.937003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.452 [2024-12-06 13:02:39.937088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.452 Running I/O for 1 seconds...[2024-12-06 13:02:39.937338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.387 00:08:52.387 lcore 0: 198560 00:08:52.387 lcore 1: 198560 00:08:52.387 lcore 2: 198561 00:08:52.387 lcore 3: 198562 00:08:52.387 done. 00:08:52.387 00:08:52.387 real 0m1.289s 00:08:52.387 user 0m4.114s 00:08:52.387 sys 0m0.054s 00:08:52.387 13:02:40 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.387 13:02:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:52.387 ************************************ 00:08:52.387 END TEST event_perf 00:08:52.387 ************************************ 00:08:52.387 13:02:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:52.387 13:02:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:52.387 13:02:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.387 13:02:41 event -- common/autotest_common.sh@10 -- # set +x 00:08:52.645 ************************************ 00:08:52.645 START TEST event_reactor 00:08:52.645 ************************************ 00:08:52.645 13:02:41 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:52.645 [2024-12-06 13:02:41.051057] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:08:52.645 [2024-12-06 13:02:41.051169] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60184 ] 00:08:52.645 [2024-12-06 13:02:41.205623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.645 [2024-12-06 13:02:41.263107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.022 test_start 00:08:54.022 oneshot 00:08:54.022 tick 100 00:08:54.022 tick 100 00:08:54.022 tick 250 00:08:54.022 tick 100 00:08:54.022 tick 100 00:08:54.022 tick 250 00:08:54.022 tick 100 00:08:54.022 tick 500 00:08:54.022 tick 100 00:08:54.022 tick 100 00:08:54.022 tick 250 00:08:54.022 tick 100 00:08:54.022 tick 100 00:08:54.022 test_end 00:08:54.022 00:08:54.022 real 0m1.277s 00:08:54.022 user 0m1.124s 00:08:54.022 sys 0m0.047s 00:08:54.022 13:02:42 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.022 ************************************ 00:08:54.022 END TEST event_reactor 00:08:54.022 ************************************ 00:08:54.022 13:02:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:54.022 13:02:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:54.022 13:02:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:54.022 13:02:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.022 13:02:42 event -- common/autotest_common.sh@10 -- # set +x 00:08:54.022 ************************************ 00:08:54.022 START TEST event_reactor_perf 00:08:54.022 ************************************ 00:08:54.022 13:02:42 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:54.022 [2024-12-06 13:02:42.381412] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
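The lt 1.15 2 xtrace that precedes each test suite above is a component-wise version compare: split both versions on . - :, walk the longer component list, compare numerically. A sketch under those traced semantics (helper names follow the trace; the real scripts/common.sh handles more operators):

```bash
# Reconstruction of the version-compare logic traced above; details hedged.
decimal() {
    local d=$1
    # The trace only ever feeds plain integers; fall back to 0 otherwise.
    if [[ $d =~ ^[0-9]+$ ]]; then echo "$d"; else echo 0; fi
}

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # Walk the longer of the two component lists, comparing numerically.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        if (( ver1[v] > ver2[v] )); then [[ $op == '>' ]]; return; fi
        if (( ver1[v] < ver2[v] )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '<=' || $op == '>=' || $op == '=' ]]   # all components equal
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 is older than 2: enable the branch/function coverage opts"
```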
00:08:54.022 [2024-12-06 13:02:42.381602] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60214 ] 00:08:54.022 [2024-12-06 13:02:42.535725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.022 [2024-12-06 13:02:42.589626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.397 test_start 00:08:55.397 test_end 00:08:55.397 Performance: 376765 events per second 00:08:55.397 00:08:55.397 real 0m1.275s 00:08:55.397 user 0m1.117s 00:08:55.397 sys 0m0.052s 00:08:55.397 13:02:43 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.397 ************************************ 00:08:55.397 END TEST event_reactor_perf 00:08:55.397 ************************************ 00:08:55.397 13:02:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:55.397 13:02:43 event -- event/event.sh@49 -- # uname -s 00:08:55.397 13:02:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:55.397 13:02:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:55.397 13:02:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.397 13:02:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.397 13:02:43 event -- common/autotest_common.sh@10 -- # set +x 00:08:55.397 ************************************ 00:08:55.397 START TEST event_scheduler 00:08:55.397 ************************************ 00:08:55.397 13:02:43 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:55.397 * Looking for test storage... 
00:08:55.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:55.397 13:02:43 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:55.397 13:02:43 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:08:55.397 13:02:43 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:55.397 13:02:43 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.397 13:02:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:55.397 13:02:43 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.397 13:02:43 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.397 --rc genhtml_branch_coverage=1 00:08:55.397 --rc genhtml_function_coverage=1 00:08:55.397 --rc genhtml_legend=1 00:08:55.397 --rc geninfo_all_blocks=1 00:08:55.397 --rc geninfo_unexecuted_blocks=1 00:08:55.397 00:08:55.397 ' 00:08:55.397 13:02:43 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.397 --rc genhtml_branch_coverage=1 00:08:55.397 --rc genhtml_function_coverage=1 00:08:55.397 --rc genhtml_legend=1 00:08:55.397 --rc geninfo_all_blocks=1 00:08:55.397 --rc geninfo_unexecuted_blocks=1 00:08:55.397 00:08:55.397 ' 00:08:55.397 13:02:43 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.397 --rc genhtml_branch_coverage=1 00:08:55.397 --rc genhtml_function_coverage=1 00:08:55.397 --rc genhtml_legend=1 00:08:55.397 --rc geninfo_all_blocks=1 00:08:55.397 --rc geninfo_unexecuted_blocks=1 00:08:55.397 00:08:55.397 ' 00:08:55.397 13:02:43 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.397 --rc genhtml_branch_coverage=1 00:08:55.397 --rc genhtml_function_coverage=1 00:08:55.397 --rc genhtml_legend=1 00:08:55.397 --rc geninfo_all_blocks=1 00:08:55.397 --rc geninfo_unexecuted_blocks=1 00:08:55.397 00:08:55.397 ' 00:08:55.398 13:02:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:55.398 13:02:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60289 00:08:55.398 13:02:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:55.398 13:02:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:55.398 13:02:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60289 00:08:55.398 13:02:43 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60289 ']' 00:08:55.398 13:02:43 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.398 13:02:43 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.398 13:02:43 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.398 13:02:43 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.398 13:02:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:55.398 [2024-12-06 13:02:43.950549] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:08:55.398 [2024-12-06 13:02:43.950660] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60289 ] 00:08:55.693 [2024-12-06 13:02:44.110360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.693 [2024-12-06 13:02:44.181832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.693 [2024-12-06 13:02:44.181967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.693 [2024-12-06 13:02:44.182002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.693 [2024-12-06 13:02:44.182005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.693 13:02:44 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.693 13:02:44 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:55.693 13:02:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:55.693 13:02:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.693 13:02:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:55.693 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:55.693 POWER: Cannot set governor of lcore 0 to userspace 00:08:55.693 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:55.693 POWER: Cannot set governor of lcore 0 to performance 00:08:55.693 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:55.693 POWER: Cannot set governor of lcore 0 to userspace 00:08:55.694 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:55.694 POWER: Cannot set governor of lcore 0 to userspace 00:08:55.694 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:55.694 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:55.694 POWER: Unable to set Power Management Environment for lcore 0 00:08:55.694 [2024-12-06 13:02:44.247611] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:55.694 [2024-12-06 13:02:44.247876] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:55.694 [2024-12-06 13:02:44.247891] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:55.694 [2024-12-06 13:02:44.247906] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:55.694 [2024-12-06 13:02:44.247916] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:55.694 [2024-12-06 13:02:44.247925] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:55.694 13:02:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.694 13:02:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:55.694 13:02:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.694 13:02:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 [2024-12-06 13:02:44.352982] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:55.953 13:02:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:55.953 13:02:44 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.953 13:02:44 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 ************************************ 00:08:55.953 START TEST scheduler_create_thread 00:08:55.953 ************************************ 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 2 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 3 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 4 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 5 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 6 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 7 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 8 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 9 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 10 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.953 13:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:56.898 13:02:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.898 00:08:56.898 real 0m1.171s 00:08:56.898 user 0m0.018s 00:08:56.898 sys 0m0.006s 00:08:56.899 13:02:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.899 13:02:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.156 ************************************ 00:08:57.156 END TEST scheduler_create_thread 00:08:57.156 ************************************ 00:08:57.156 13:02:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:57.156 13:02:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60289 00:08:57.156 13:02:45 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60289 ']' 00:08:57.156 13:02:45 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60289 00:08:57.156 13:02:45 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:57.156 13:02:45 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.156 13:02:45 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60289 00:08:57.156 13:02:45 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:57.156 13:02:45 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:57.156 killing process with pid 60289 00:08:57.156 13:02:45 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60289' 00:08:57.156 13:02:45 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60289 00:08:57.156 13:02:45 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60289 00:08:57.414 [2024-12-06 13:02:46.015648] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:57.673 00:08:57.673 real 0m2.535s 00:08:57.673 user 0m2.796s 00:08:57.673 sys 0m0.409s 00:08:57.673 13:02:46 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.673 ************************************ 00:08:57.673 END TEST event_scheduler 00:08:57.673 13:02:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:57.673 ************************************ 00:08:57.673 13:02:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:57.673 13:02:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:57.673 13:02:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.673 13:02:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.673 13:02:46 event -- common/autotest_common.sh@10 -- # set +x 00:08:57.673 ************************************ 00:08:57.673 START TEST app_repeat 00:08:57.673 ************************************ 00:08:57.673 13:02:46 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60371 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:57.673 Process app_repeat pid: 60371 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60371' 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:57.673 spdk_app_start Round 0 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:57.673 13:02:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60371 /var/tmp/spdk-nbd.sock 00:08:57.673 13:02:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60371 ']' 00:08:57.673 13:02:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:57.673 13:02:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:57.673 13:02:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:57.673 13:02:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.673 13:02:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:57.673 [2024-12-06 13:02:46.310485] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
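The scheduler_create_thread trace above drives the scheduler test app entirely over RPC. A condensed sketch of those calls, with the create order slightly regrouped; the rpc entry point is an assumption standing in for the test's rpc_cmd wrapper, while the -n/-m/-a flag usage is read directly off the traced invocations:

```bash
# ASSUMPTION: scripts/rpc.py against the default socket replaces rpc_cmd.
rpc="scripts/rpc.py --plugin scheduler_plugin"

# Threads 2-9 in the trace: one 100%-active and one idle thread pinned
# to each of the four cores in the 0xF mask.
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
    $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# Unpinned threads: one at 30% load, one created idle and raised to 50%.
$rpc scheduler_thread_create -n one_third_active -a 30
thread_id=$($rpc scheduler_thread_create -n half_active -a 0)     # id 11 above
$rpc scheduler_thread_set_active "$thread_id" 50

# One thread created only to be deleted again (id 12 in the trace).
thread_id=$($rpc scheduler_thread_create -n deleted -a 100)
$rpc scheduler_thread_delete "$thread_id"
```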
00:08:57.673 [2024-12-06 13:02:46.310574] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60371 ] 00:08:57.933 [2024-12-06 13:02:46.453709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.933 [2024-12-06 13:02:46.534380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.933 [2024-12-06 13:02:46.534392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.192 13:02:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.192 13:02:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:58.192 13:02:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:58.451 Malloc0 00:08:58.451 13:02:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:58.709 Malloc1 00:08:58.709 13:02:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:58.709 13:02:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.709 13:02:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:58.709 13:02:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:58.709 13:02:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.709 13:02:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:58.709 13:02:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:58.709 13:02:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.709 13:02:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:58.710 13:02:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:58.710 13:02:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.710 13:02:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:58.710 13:02:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:58.710 13:02:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:58.710 13:02:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.710 13:02:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:58.968 /dev/nbd0 00:08:58.968 13:02:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:58.968 13:02:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:58.968 13:02:47 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:58.968 1+0 records in 00:08:58.968 1+0 records out 00:08:58.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030068 s, 13.6 MB/s 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:58.968 13:02:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:58.968 13:02:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.968 13:02:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.968 13:02:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:59.535 /dev/nbd1 00:08:59.535 13:02:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:59.535 13:02:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:59.535 1+0 records in 00:08:59.535 1+0 records out 00:08:59.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037018 s, 11.1 MB/s 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:59.535 13:02:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:59.535 13:02:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.535 13:02:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:59.535 13:02:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.535 13:02:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.535 
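waitfornbd, traced above for nbd0 and nbd1, first polls /proc/partitions and then proves the device readable with a single O_DIRECT block. Reconstructed from the trace; the retry limit of 20 comes from the traced loop, while the sleep interval and scratch path here are assumptions:

```bash
# Sketch of waitfornbd as exercised above; details outside the trace hedged.
waitfornbd() {
    local nbd_name=$1 i
    # Poll until the kernel exposes the device in /proc/partitions.
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Read one direct-I/O block to make sure the device actually answers.
    for (( i = 1; i <= 20; i++ )); do
        local size
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        if [ "$size" != 0 ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```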
13:02:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:59.794 { 00:08:59.794 "bdev_name": "Malloc0", 00:08:59.794 "nbd_device": "/dev/nbd0" 00:08:59.794 }, 00:08:59.794 { 00:08:59.794 "bdev_name": "Malloc1", 00:08:59.794 "nbd_device": "/dev/nbd1" 00:08:59.794 } 00:08:59.794 ]' 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:59.794 { 00:08:59.794 "bdev_name": "Malloc0", 00:08:59.794 "nbd_device": "/dev/nbd0" 00:08:59.794 }, 00:08:59.794 { 00:08:59.794 "bdev_name": "Malloc1", 00:08:59.794 "nbd_device": "/dev/nbd1" 00:08:59.794 } 00:08:59.794 ]' 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:59.794 /dev/nbd1' 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:59.794 /dev/nbd1' 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:59.794 256+0 records in 00:08:59.794 256+0 records out 00:08:59.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00757412 s, 138 MB/s 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:59.794 256+0 records in 00:08:59.794 256+0 records out 00:08:59.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208406 s, 50.3 MB/s 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:59.794 256+0 records in 00:08:59.794 256+0 records out 00:08:59.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243557 s, 43.1 MB/s 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:59.794 13:02:48 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:59.794 13:02:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:59.795 13:02:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:59.795 13:02:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.795 13:02:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:59.795 13:02:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:59.795 13:02:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:59.795 13:02:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.795 13:02:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:00.361 13:02:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:00.361 13:02:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:00.361 13:02:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:00.361 13:02:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.361 13:02:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.361 13:02:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:00.361 13:02:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:00.361 13:02:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.361 13:02:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.361 13:02:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:00.620 13:02:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:00.620 13:02:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:00.620 13:02:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:00.620 13:02:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.620 13:02:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.620 13:02:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:00.620 13:02:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:00.620 13:02:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.620 13:02:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:00.620 13:02:49 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.620 13:02:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:00.878 13:02:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:00.878 13:02:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:01.136 13:02:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:01.395 [2024-12-06 13:02:49.922572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.395 [2024-12-06 13:02:49.977857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.395 [2024-12-06 13:02:49.977869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.395 [2024-12-06 13:02:50.032353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:01.395 [2024-12-06 13:02:50.032419] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:04.696 13:02:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:04.696 spdk_app_start Round 1 00:09:04.696 13:02:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:04.696 13:02:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60371 /var/tmp/spdk-nbd.sock 00:09:04.696 13:02:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60371 ']' 00:09:04.696 13:02:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:04.696 13:02:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:04.696 13:02:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
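The write/verify cycle traced above (nbd_dd_data_verify in bdev/nbd_common.sh) reduces to: fill a scratch file with 1 MiB of random data, dd that file onto every exported NBD device with O_DIRECT, then byte-compare the head of each device against the same file. A minimal standalone sketch of the pattern, assuming the devices are already exported (paths and device names are illustrative, not the exact helper body):

#!/usr/bin/env bash
# Sketch of the dd + cmp data-verify pattern from bdev/nbd_common.sh.
set -euo pipefail

tmp_file=/tmp/nbdrandtest            # scratch payload: 256 * 4 KiB = 1 MiB
nbd_list=(/dev/nbd0 /dev/nbd1)       # assumed to be exported already

# Write phase: the same random payload goes to every device, bypassing
# the page cache so the bdev underneath really sees the data.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# Verify phase: cmp exits non-zero on the first differing byte, which
# is what fails the test.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"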
00:09:04.696 13:02:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.696 13:02:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:04.696 13:02:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.696 13:02:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:04.696 13:02:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:04.954 Malloc0 00:09:04.954 13:02:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:05.213 Malloc1 00:09:05.213 13:02:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.213 13:02:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:05.472 /dev/nbd0 00:09:05.472 13:02:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:05.472 13:02:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:05.472 1+0 records in 00:09:05.472 1+0 records out 
00:09:05.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574962 s, 7.1 MB/s 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.472 13:02:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:05.472 13:02:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.472 13:02:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.472 13:02:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:05.733 /dev/nbd1 00:09:05.733 13:02:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:05.733 13:02:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:05.733 1+0 records in 00:09:05.733 1+0 records out 00:09:05.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285904 s, 14.3 MB/s 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.733 13:02:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:05.733 13:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.733 13:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.733 13:02:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.733 13:02:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.733 13:02:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:06.302 13:02:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:06.302 { 00:09:06.302 "bdev_name": "Malloc0", 00:09:06.303 "nbd_device": "/dev/nbd0" 00:09:06.303 }, 00:09:06.303 { 00:09:06.303 "bdev_name": "Malloc1", 00:09:06.303 "nbd_device": "/dev/nbd1" 00:09:06.303 } 
00:09:06.303 ]' 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:06.303 { 00:09:06.303 "bdev_name": "Malloc0", 00:09:06.303 "nbd_device": "/dev/nbd0" 00:09:06.303 }, 00:09:06.303 { 00:09:06.303 "bdev_name": "Malloc1", 00:09:06.303 "nbd_device": "/dev/nbd1" 00:09:06.303 } 00:09:06.303 ]' 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:06.303 /dev/nbd1' 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:06.303 /dev/nbd1' 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:06.303 256+0 records in 00:09:06.303 256+0 records out 00:09:06.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00929412 s, 113 MB/s 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:06.303 256+0 records in 00:09:06.303 256+0 records out 00:09:06.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271217 s, 38.7 MB/s 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:06.303 256+0 records in 00:09:06.303 256+0 records out 00:09:06.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263381 s, 39.8 MB/s 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:06.303 13:02:54 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.303 13:02:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:06.562 13:02:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:06.562 13:02:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:06.562 13:02:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:06.562 13:02:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.562 13:02:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.562 13:02:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:06.562 13:02:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:06.562 13:02:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.562 13:02:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.562 13:02:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:06.820 13:02:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:06.820 13:02:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:06.820 13:02:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:06.820 13:02:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.820 13:02:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.820 13:02:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:07.078 13:02:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:07.078 13:02:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.078 13:02:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:07.078 13:02:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.078 13:02:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:07.336 13:02:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:07.336 13:02:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:07.901 13:02:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:07.901 [2024-12-06 13:02:56.407219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:07.901 [2024-12-06 13:02:56.466994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.901 [2024-12-06 13:02:56.467007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.901 [2024-12-06 13:02:56.524317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:07.901 [2024-12-06 13:02:56.524428] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:11.184 13:02:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:11.184 spdk_app_start Round 2 00:09:11.184 13:02:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:11.184 13:02:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60371 /var/tmp/spdk-nbd.sock 00:09:11.184 13:02:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60371 ']' 00:09:11.184 13:02:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:11.184 13:02:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:11.184 13:02:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
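nbd_get_count, traced at the end of each round above, asks the target for its disk list over the RPC socket and counts /dev/nbd entries; the bare "true" in the @65 lines is the fallback that turns grep -c's non-zero exit on an empty list into a clean count of 0. A hedged sketch of that counting step (rpc.py path and socket taken from this run; error handling simplified):

# Count NBD devices currently exported by the target (sketch).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

disks_json=$("$rpc" -s "$sock" nbd_get_disks)            # '[]' once all disks stop
names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')  # one device path per line
count=$(echo "$names" | grep -c /dev/nbd || true)        # 0 matches -> exit 1 -> swallow it
echo "exported NBD devices: $count"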
00:09:11.184 13:02:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.184 13:02:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:11.184 13:02:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.184 13:02:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:11.184 13:02:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:11.443 Malloc0 00:09:11.443 13:02:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:11.702 Malloc1 00:09:11.702 13:03:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:11.702 13:03:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:11.960 /dev/nbd0 00:09:11.960 13:03:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:11.960 13:03:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:11.960 1+0 records in 00:09:11.960 1+0 records out 
00:09:11.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245044 s, 16.7 MB/s 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:11.960 13:03:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:11.960 13:03:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.960 13:03:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:11.960 13:03:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:12.218 /dev/nbd1 00:09:12.475 13:03:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:12.475 13:03:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:12.475 1+0 records in 00:09:12.475 1+0 records out 00:09:12.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317609 s, 12.9 MB/s 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:12.475 13:03:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:12.475 13:03:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.475 13:03:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:12.475 13:03:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.475 13:03:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.475 13:03:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:12.734 { 00:09:12.734 "bdev_name": "Malloc0", 00:09:12.734 "nbd_device": "/dev/nbd0" 00:09:12.734 }, 00:09:12.734 { 00:09:12.734 "bdev_name": "Malloc1", 00:09:12.734 "nbd_device": "/dev/nbd1" 00:09:12.734 } 
00:09:12.734 ]' 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:12.734 { 00:09:12.734 "bdev_name": "Malloc0", 00:09:12.734 "nbd_device": "/dev/nbd0" 00:09:12.734 }, 00:09:12.734 { 00:09:12.734 "bdev_name": "Malloc1", 00:09:12.734 "nbd_device": "/dev/nbd1" 00:09:12.734 } 00:09:12.734 ]' 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:12.734 /dev/nbd1' 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:12.734 /dev/nbd1' 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:12.734 256+0 records in 00:09:12.734 256+0 records out 00:09:12.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00886645 s, 118 MB/s 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:12.734 256+0 records in 00:09:12.734 256+0 records out 00:09:12.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213492 s, 49.1 MB/s 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:12.734 256+0 records in 00:09:12.734 256+0 records out 00:09:12.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022764 s, 46.1 MB/s 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.734 13:03:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:12.995 13:03:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:12.995 13:03:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:12.995 13:03:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:12.995 13:03:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.995 13:03:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.995 13:03:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:12.995 13:03:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:12.995 13:03:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.995 13:03:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.995 13:03:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.561 13:03:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:13.819 13:03:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:13.819 13:03:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:14.078 13:03:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:14.336 [2024-12-06 13:03:02.842946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:14.336 [2024-12-06 13:03:02.906060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.336 [2024-12-06 13:03:02.906072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.336 [2024-12-06 13:03:02.961686] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:14.336 [2024-12-06 13:03:02.961765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:17.618 13:03:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60371 /var/tmp/spdk-nbd.sock 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60371 ']' 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
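waitfornbd_exit, traced for nbd0 and nbd1 above, is the teardown counterpart of waitfornbd: after nbd_stop_disk it polls /proc/partitions until the kernel drops the entry, with a 20-iteration budget; the break at @41 fires the moment grep stops matching. A minimal sketch of that loop (the sleep interval is an assumption):

# Poll until the kernel no longer lists the device (sketch of waitfornbd_exit).
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1    # still registered; give the kernel a moment
        else
            break        # gone from /proc/partitions
        fi
    done
    return 0
}

waitfornbd_exit nbd0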
00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:17.618 13:03:05 event.app_repeat -- event/event.sh@39 -- # killprocess 60371 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60371 ']' 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60371 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.618 13:03:05 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60371 00:09:17.618 13:03:06 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.618 13:03:06 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.618 killing process with pid 60371 00:09:17.618 13:03:06 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60371' 00:09:17.618 13:03:06 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60371 00:09:17.618 13:03:06 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60371 00:09:17.618 spdk_app_start is called in Round 0. 00:09:17.618 Shutdown signal received, stop current app iteration 00:09:17.618 Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 reinitialization... 00:09:17.618 spdk_app_start is called in Round 1. 00:09:17.618 Shutdown signal received, stop current app iteration 00:09:17.618 Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 reinitialization... 00:09:17.618 spdk_app_start is called in Round 2. 00:09:17.618 Shutdown signal received, stop current app iteration 00:09:17.618 Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 reinitialization... 00:09:17.618 spdk_app_start is called in Round 3. 00:09:17.618 Shutdown signal received, stop current app iteration 00:09:17.618 13:03:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:17.618 13:03:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:17.618 00:09:17.618 real 0m19.911s 00:09:17.618 user 0m45.645s 00:09:17.618 sys 0m3.229s 00:09:17.618 13:03:06 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.618 13:03:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 ************************************ 00:09:17.618 END TEST app_repeat 00:09:17.618 ************************************ 00:09:17.618 13:03:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:17.618 13:03:06 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:17.618 13:03:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.618 13:03:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.618 13:03:06 event -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 ************************************ 00:09:17.618 START TEST cpu_locks 00:09:17.618 ************************************ 00:09:17.618 13:03:06 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:17.876 * Looking for test storage... 
00:09:17.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.876 13:03:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:17.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.876 --rc genhtml_branch_coverage=1 00:09:17.876 --rc genhtml_function_coverage=1 00:09:17.876 --rc genhtml_legend=1 00:09:17.876 --rc geninfo_all_blocks=1 00:09:17.876 --rc geninfo_unexecuted_blocks=1 00:09:17.876 00:09:17.876 ' 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:17.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.876 --rc genhtml_branch_coverage=1 00:09:17.876 --rc genhtml_function_coverage=1 
00:09:17.876 --rc genhtml_legend=1 00:09:17.876 --rc geninfo_all_blocks=1 00:09:17.876 --rc geninfo_unexecuted_blocks=1 00:09:17.876 00:09:17.876 ' 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:17.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.876 --rc genhtml_branch_coverage=1 00:09:17.876 --rc genhtml_function_coverage=1 00:09:17.876 --rc genhtml_legend=1 00:09:17.876 --rc geninfo_all_blocks=1 00:09:17.876 --rc geninfo_unexecuted_blocks=1 00:09:17.876 00:09:17.876 ' 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:17.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.876 --rc genhtml_branch_coverage=1 00:09:17.876 --rc genhtml_function_coverage=1 00:09:17.876 --rc genhtml_legend=1 00:09:17.876 --rc geninfo_all_blocks=1 00:09:17.876 --rc geninfo_unexecuted_blocks=1 00:09:17.876 00:09:17.876 ' 00:09:17.876 13:03:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:17.876 13:03:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:17.876 13:03:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:17.876 13:03:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.876 13:03:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:17.876 ************************************ 00:09:17.876 START TEST default_locks 00:09:17.876 ************************************ 00:09:17.876 13:03:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:17.876 13:03:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61007 00:09:17.876 13:03:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61007 00:09:17.876 13:03:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:17.876 13:03:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61007 ']' 00:09:17.876 13:03:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.876 13:03:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.876 13:03:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.876 13:03:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.876 13:03:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:18.134 [2024-12-06 13:03:06.517553] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
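The lcov probe at the top of cpu_locks (cmp_versions in scripts/common.sh, traced above) splits both version strings on '.', '-' and ':' into arrays and walks them field by field, padding the shorter one, which is how "1.15" is judged older than "2". A simplified sketch of that comparison, assuming purely numeric fields (the real helper also handles the other comparison operators):

# Return 0 when $1 is an older dotted version than $2 (numeric fields only).
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing fields with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not less-than
}

version_lt 1.15 2 && echo "old lcov: pass the branch/function coverage flags"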
00:09:18.134 [2024-12-06 13:03:06.517652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61007 ] 00:09:18.134 [2024-12-06 13:03:06.670363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.134 [2024-12-06 13:03:06.739253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.066 13:03:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.066 13:03:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:19.066 13:03:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61007 00:09:19.066 13:03:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:19.066 13:03:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61007 00:09:19.630 13:03:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61007 00:09:19.631 13:03:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 61007 ']' 00:09:19.631 13:03:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 61007 00:09:19.631 13:03:07 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:19.631 13:03:07 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.631 13:03:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61007 00:09:19.631 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.631 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.631 killing process with pid 61007 00:09:19.631 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61007' 00:09:19.631 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 61007 00:09:19.631 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 61007 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61007 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61007 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 61007 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61007 ']' 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.889 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.889 ERROR: process (pid: 61007) is no longer running 00:09:19.889 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61007) - No such process 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:19.889 00:09:19.889 real 0m1.967s 00:09:19.889 user 0m2.131s 00:09:19.889 sys 0m0.606s 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.889 ************************************ 00:09:19.889 END TEST default_locks 00:09:19.889 ************************************ 00:09:19.889 13:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.889 13:03:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:19.889 13:03:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.889 13:03:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.889 13:03:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.889 ************************************ 00:09:19.889 START TEST default_locks_via_rpc 00:09:19.889 ************************************ 00:09:19.889 13:03:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:19.889 13:03:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61071 00:09:19.889 13:03:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61071 00:09:19.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
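The NOT-wrapped waitforlisten traced above is autotest_common.sh's negative-assertion idiom: run the command, capture its exit status, and invert it, so waitforlisten failing against the already-killed pid 61007 (es=1) is exactly what makes the test pass. A rough sketch of that inversion (the real helper also validates the wrapped command and special-cases statuses above 128):

# Succeed only when the wrapped command fails (sketch of the NOT pattern).
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # invert: a zero status here would fail the assertion
}

NOT kill -0 99999999 && echo "pid is gone, as expected"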
00:09:19.889 13:03:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61071 ']' 00:09:19.889 13:03:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:19.889 13:03:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.889 13:03:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.889 13:03:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.889 13:03:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.889 13:03:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.147 [2024-12-06 13:03:08.535215] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:20.147 [2024-12-06 13:03:08.535588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61071 ] 00:09:20.147 [2024-12-06 13:03:08.677019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.147 [2024-12-06 13:03:08.734567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61071 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61071 00:09:21.081 13:03:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61071 
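Where default_locks proved the lock files vanish on process exit, the via_rpc variant above releases and retakes them on a live target through RPCs. A sketch of that round-trip (socket path as in this run; tgt_pid stands in for the target's pid):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Ask the running target to drop its per-core lock files...
"$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock || true   # expect 0 here

# ...then re-acquire them without a restart.
"$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "locks are back"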
00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61071 ']' 00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61071 00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61071 00:09:21.648 killing process with pid 61071 00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61071' 00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61071 00:09:21.648 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61071 00:09:21.907 ************************************ 00:09:21.907 END TEST default_locks_via_rpc 00:09:21.907 ************************************ 00:09:21.907 00:09:21.907 real 0m1.994s 00:09:21.907 user 0m2.181s 00:09:21.907 sys 0m0.602s 00:09:21.907 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.907 13:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.907 13:03:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:21.907 13:03:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.907 13:03:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.907 13:03:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:21.907 ************************************ 00:09:21.907 START TEST non_locking_app_on_locked_coremask 00:09:21.907 ************************************ 00:09:21.907 13:03:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:21.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.907 13:03:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61140 00:09:21.907 13:03:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61140 /var/tmp/spdk.sock 00:09:21.907 13:03:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:21.907 13:03:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61140 ']' 00:09:21.907 13:03:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.907 13:03:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.907 13:03:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
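killprocess, traced twice so far, guards its kill with a liveness probe and a name check so a recycled pid is never signalled by mistake. A condensed sketch of that shutdown pattern (the real helper additionally special-cases sudo-wrapped targets):

# Terminate a test target by pid (sketch of the killprocess pattern).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap; tolerate non-children
}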
00:09:21.907 13:03:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.907 13:03:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:22.165 [2024-12-06 13:03:10.588288] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:22.165 [2024-12-06 13:03:10.588424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61140 ] 00:09:22.165 [2024-12-06 13:03:10.736902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.165 [2024-12-06 13:03:10.799134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61154 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61154 /var/tmp/spdk2.sock 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61154 ']' 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.728 13:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:22.728 [2024-12-06 13:03:11.151492] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:22.728 [2024-12-06 13:03:11.151840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61154 ] 00:09:22.728 [2024-12-06 13:03:11.311215] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
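Stripped to the two launches, the scenario above is: the first target claims core 0's lock, and the second opts out with --disable-cpumask-locks so both can share the core, each on its own RPC socket. A sketch with paths shortened (flags exactly as in the trace):

spdk_tgt -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, takes no lock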
00:09:22.728 [2024-12-06 13:03:11.311301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.986 [2024-12-06 13:03:11.445565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.553 13:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.553 13:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:23.553 13:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61140 00:09:23.553 13:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61140 00:09:23.553 13:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61140 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61140 ']' 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61140 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61140 00:09:24.489 killing process with pid 61140 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61140' 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61140 00:09:24.489 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61140 00:09:25.424 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61154 00:09:25.424 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61154 ']' 00:09:25.424 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61154 00:09:25.424 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:25.424 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.424 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61154 00:09:25.424 killing process with pid 61154 00:09:25.424 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.424 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.424 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61154' 00:09:25.424 13:03:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61154 00:09:25.424 13:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61154 00:09:25.683 ************************************ 00:09:25.683 END TEST non_locking_app_on_locked_coremask 00:09:25.683 ************************************ 00:09:25.683 00:09:25.683 real 0m3.745s 00:09:25.683 user 0m4.044s 00:09:25.683 sys 0m1.158s 00:09:25.683 13:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.683 13:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:25.683 13:03:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:25.683 13:03:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.683 13:03:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.683 13:03:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:25.683 ************************************ 00:09:25.683 START TEST locking_app_on_unlocked_coremask 00:09:25.683 ************************************ 00:09:25.683 13:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:25.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.683 13:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61239 00:09:25.683 13:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61239 /var/tmp/spdk.sock 00:09:25.683 13:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61239 ']' 00:09:25.683 13:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.683 13:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.683 13:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:25.683 13:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.683 13:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.683 13:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:25.942 [2024-12-06 13:03:14.404332] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:25.942 [2024-12-06 13:03:14.404442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61239 ] 00:09:25.942 [2024-12-06 13:03:14.555748] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:25.942 [2024-12-06 13:03:14.555810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.201 [2024-12-06 13:03:14.625845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61267 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61267 /var/tmp/spdk2.sock 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61267 ']' 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:26.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.777 13:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.096 [2024-12-06 13:03:15.448802] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:27.096 [2024-12-06 13:03:15.449372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61267 ] 00:09:27.096 [2024-12-06 13:03:15.613493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.355 [2024-12-06 13:03:15.740590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.924 13:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.924 13:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:27.924 13:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61267 00:09:27.924 13:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61267 00:09:27.924 13:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61239 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61239 ']' 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61239 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61239 00:09:28.861 killing process with pid 61239 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61239' 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61239 00:09:28.861 13:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61239 00:09:29.798 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61267 00:09:29.798 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61267 ']' 00:09:29.798 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61267 00:09:29.798 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:29.798 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.798 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61267 00:09:29.798 killing process with pid 61267 00:09:29.798 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.798 13:03:18 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.798 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61267' 00:09:29.798 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61267 00:09:29.798 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61267 00:09:30.058 00:09:30.058 real 0m4.294s 00:09:30.058 user 0m4.931s 00:09:30.058 sys 0m1.144s 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.058 ************************************ 00:09:30.058 END TEST locking_app_on_unlocked_coremask 00:09:30.058 ************************************ 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.058 13:03:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:30.058 13:03:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.058 13:03:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.058 13:03:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:30.058 ************************************ 00:09:30.058 START TEST locking_app_on_locked_coremask 00:09:30.058 ************************************ 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61346 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61346 /var/tmp/spdk.sock 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61346 ']' 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.058 13:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.318 [2024-12-06 13:03:18.723724] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:30.318 [2024-12-06 13:03:18.723848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61346 ] 00:09:30.318 [2024-12-06 13:03:18.873580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.318 [2024-12-06 13:03:18.938565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61360 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61360 /var/tmp/spdk2.sock 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61360 /var/tmp/spdk2.sock 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:30.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:30.885 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.886 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61360 /var/tmp/spdk2.sock 00:09:30.886 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61360 ']' 00:09:30.886 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:30.886 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.886 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:30.886 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.886 13:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.886 [2024-12-06 13:03:19.311871] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
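The NOT/valid_exec_arg sequence above is the suite's expect-failure wrapper: the second target must never come up, because core 0 is already locked, so the test passes only when waitforlisten fails. In spirit (heavily simplified; the real helper also validates the argument type and maps exit codes):

NOT() {
  # invert the command's status: success becomes failure and vice versa
  if "$@"; then return 1; else return 0; fi
}
NOT waitforlisten 61360 /var/tmp/spdk2.sock   # waitforlisten comes from the trace, not defined here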
00:09:30.886 [2024-12-06 13:03:19.311979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61360 ] 00:09:30.886 [2024-12-06 13:03:19.473270] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61346 has claimed it. 00:09:30.886 [2024-12-06 13:03:19.473353] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:31.452 ERROR: process (pid: 61360) is no longer running 00:09:31.452 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61360) - No such process 00:09:31.452 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.452 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:31.452 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:31.452 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.452 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:31.452 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.452 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61346 00:09:31.452 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61346 00:09:31.452 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61346 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61346 ']' 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61346 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61346 00:09:32.019 killing process with pid 61346 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61346' 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61346 00:09:32.019 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61346 00:09:32.277 00:09:32.277 real 0m2.199s 00:09:32.277 user 0m2.483s 00:09:32.277 sys 0m0.600s 00:09:32.277 ************************************ 00:09:32.277 END TEST locking_app_on_locked_coremask 00:09:32.277 ************************************ 00:09:32.277 13:03:20 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.277 13:03:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.277 13:03:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:32.277 13:03:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.277 13:03:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.277 13:03:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:32.277 ************************************ 00:09:32.277 START TEST locking_overlapped_coremask 00:09:32.277 ************************************ 00:09:32.277 13:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:32.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.277 13:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61412 00:09:32.277 13:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61412 /var/tmp/spdk.sock 00:09:32.277 13:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:32.277 13:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61412 ']' 00:09:32.277 13:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.277 13:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.277 13:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.277 13:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.277 13:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.535 [2024-12-06 13:03:20.959280] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:32.535 [2024-12-06 13:03:20.959580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61412 ] 00:09:32.535 [2024-12-06 13:03:21.100975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.535 [2024-12-06 13:03:21.171090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.536 [2024-12-06 13:03:21.171173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.536 [2024-12-06 13:03:21.171179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61434 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61434 /var/tmp/spdk2.sock 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61434 /var/tmp/spdk2.sock 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61434 /var/tmp/spdk2.sock 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61434 ']' 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.103 13:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:33.103 [2024-12-06 13:03:21.547787] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
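The failure that follows is pure mask arithmetic: the first target took 0x7 (cores 0-2) and this one asks for 0x1c (cores 2-4), so they collide on core 2, exactly the core the claim error below names. A one-liner makes the overlap concrete:

printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4 -> bit 2 -> core 2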
00:09:33.103 [2024-12-06 13:03:21.547909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61434 ] 00:09:33.103 [2024-12-06 13:03:21.714364] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61412 has claimed it. 00:09:33.103 [2024-12-06 13:03:21.714444] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:33.667 ERROR: process (pid: 61434) is no longer running 00:09:33.667 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61434) - No such process 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61412 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61412 ']' 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61412 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.667 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61412 00:09:33.925 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.925 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.925 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61412' 00:09:33.925 killing process with pid 61412 00:09:33.925 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61412 00:09:33.925 13:03:22 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61412 00:09:34.184 00:09:34.184 real 0m1.805s 00:09:34.184 user 0m4.897s 00:09:34.184 sys 0m0.442s 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:34.184 ************************************ 00:09:34.184 END TEST locking_overlapped_coremask 00:09:34.184 ************************************ 00:09:34.184 13:03:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:34.184 13:03:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.184 13:03:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.184 13:03:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:34.184 ************************************ 00:09:34.184 START TEST locking_overlapped_coremask_via_rpc 00:09:34.184 ************************************ 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61480 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61480 /var/tmp/spdk.sock 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:34.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61480 ']' 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.184 13:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.483 [2024-12-06 13:03:22.843217] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:34.483 [2024-12-06 13:03:22.843811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61480 ] 00:09:34.483 [2024-12-06 13:03:22.992852] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:34.483 [2024-12-06 13:03:22.993201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:34.483 [2024-12-06 13:03:23.061126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.483 [2024-12-06 13:03:23.061229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.483 [2024-12-06 13:03:23.061234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61496 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61496 /var/tmp/spdk2.sock 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61496 ']' 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:34.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.741 13:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.999 [2024-12-06 13:03:23.419914] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:34.999 [2024-12-06 13:03:23.420250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61496 ] 00:09:34.999 [2024-12-06 13:03:23.583167] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:34.999 [2024-12-06 13:03:23.583232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.257 [2024-12-06 13:03:23.719385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.257 [2024-12-06 13:03:23.719562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:35.257 [2024-12-06 13:03:23.719564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.190 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.190 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.191 [2024-12-06 13:03:24.479632] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61480 has claimed it. 00:09:36.191 2024/12/06 13:03:24 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:09:36.191 request: 00:09:36.191 { 00:09:36.191 "method": "framework_enable_cpumask_locks", 00:09:36.191 "params": {} 00:09:36.191 } 00:09:36.191 Got JSON-RPC error response 00:09:36.191 GoRPCClient: error on JSON-RPC call 00:09:36.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
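The deliberate failure above is driven over JSON-RPC: with both targets up lock-free, the test re-enables core locks on the first target, then asks the second to do the same, and the claim on core 2 (held by pid 61480) makes that call return -32603. Something like this reproduces the round trip with SPDK's stock client (socket path as in the trace; the exact script location is an assumption):

scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# -> error received for framework_enable_cpumask_locks method:
#    Code=-32603 Msg=Failed to claim CPU core: 2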
00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61480 /var/tmp/spdk.sock 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61480 ']' 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61496 /var/tmp/spdk2.sock 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61496 ']' 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.191 13:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.757 ************************************ 00:09:36.757 END TEST locking_overlapped_coremask_via_rpc 00:09:36.757 ************************************ 00:09:36.757 13:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.757 13:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:36.757 13:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:36.757 13:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:36.757 13:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:36.757 13:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:36.757 00:09:36.757 real 0m2.486s 00:09:36.757 user 0m1.455s 00:09:36.757 sys 0m0.209s 00:09:36.757 13:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.757 13:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.757 13:03:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:36.757 13:03:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61480 ]] 00:09:36.757 13:03:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61480 00:09:36.757 13:03:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61480 ']' 00:09:36.757 13:03:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61480 00:09:36.757 13:03:25 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:36.757 13:03:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.757 13:03:25 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61480 00:09:36.757 killing process with pid 61480 00:09:36.757 13:03:25 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.757 13:03:25 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.757 13:03:25 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61480' 00:09:36.757 13:03:25 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61480 00:09:36.757 13:03:25 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61480 00:09:37.324 13:03:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61496 ]] 00:09:37.324 13:03:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61496 00:09:37.324 13:03:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61496 ']' 00:09:37.324 13:03:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61496 00:09:37.324 13:03:25 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:37.324 13:03:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.324 
13:03:25 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61496 00:09:37.324 13:03:25 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:37.324 13:03:25 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:37.324 13:03:25 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61496' 00:09:37.324 killing process with pid 61496 00:09:37.324 13:03:25 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61496 00:09:37.324 13:03:25 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61496 00:09:37.582 13:03:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:37.582 Process with pid 61480 is not found 00:09:37.582 Process with pid 61496 is not found 00:09:37.582 13:03:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:37.582 13:03:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61480 ]] 00:09:37.582 13:03:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61480 00:09:37.582 13:03:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61480 ']' 00:09:37.582 13:03:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61480 00:09:37.582 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61480) - No such process 00:09:37.583 13:03:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61480 is not found' 00:09:37.583 13:03:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61496 ]] 00:09:37.583 13:03:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61496 00:09:37.583 13:03:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61496 ']' 00:09:37.583 13:03:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61496 00:09:37.583 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61496) - No such process 00:09:37.583 13:03:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61496 is not found' 00:09:37.583 13:03:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:37.583 ************************************ 00:09:37.583 END TEST cpu_locks 00:09:37.583 ************************************ 00:09:37.583 00:09:37.583 real 0m19.871s 00:09:37.583 user 0m34.847s 00:09:37.583 sys 0m5.669s 00:09:37.583 13:03:26 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.583 13:03:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:37.583 ************************************ 00:09:37.583 END TEST event 00:09:37.583 ************************************ 00:09:37.583 00:09:37.583 real 0m46.660s 00:09:37.583 user 1m29.847s 00:09:37.583 sys 0m9.739s 00:09:37.583 13:03:26 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.583 13:03:26 event -- common/autotest_common.sh@10 -- # set +x 00:09:37.583 13:03:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:37.583 13:03:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.583 13:03:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.583 13:03:26 -- common/autotest_common.sh@10 -- # set +x 00:09:37.583 ************************************ 00:09:37.583 START TEST thread 00:09:37.583 ************************************ 00:09:37.583 13:03:26 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:37.841 * Looking for test storage... 
00:09:37.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:37.841 13:03:26 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.841 13:03:26 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.841 13:03:26 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.841 13:03:26 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.841 13:03:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.841 13:03:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.841 13:03:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.841 13:03:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.841 13:03:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.841 13:03:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.841 13:03:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.841 13:03:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.841 13:03:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.841 13:03:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.841 13:03:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.841 13:03:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:37.841 13:03:26 thread -- scripts/common.sh@345 -- # : 1 00:09:37.841 13:03:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.841 13:03:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.841 13:03:26 thread -- scripts/common.sh@365 -- # decimal 1 00:09:37.841 13:03:26 thread -- scripts/common.sh@353 -- # local d=1 00:09:37.841 13:03:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.841 13:03:26 thread -- scripts/common.sh@355 -- # echo 1 00:09:37.841 13:03:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.841 13:03:26 thread -- scripts/common.sh@366 -- # decimal 2 00:09:37.841 13:03:26 thread -- scripts/common.sh@353 -- # local d=2 00:09:37.841 13:03:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.841 13:03:26 thread -- scripts/common.sh@355 -- # echo 2 00:09:37.841 13:03:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.841 13:03:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.841 13:03:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.841 13:03:26 thread -- scripts/common.sh@368 -- # return 0 00:09:37.841 13:03:26 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.841 13:03:26 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.841 --rc genhtml_branch_coverage=1 00:09:37.841 --rc genhtml_function_coverage=1 00:09:37.841 --rc genhtml_legend=1 00:09:37.841 --rc geninfo_all_blocks=1 00:09:37.841 --rc geninfo_unexecuted_blocks=1 00:09:37.841 00:09:37.841 ' 00:09:37.841 13:03:26 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.841 --rc genhtml_branch_coverage=1 00:09:37.841 --rc genhtml_function_coverage=1 00:09:37.841 --rc genhtml_legend=1 00:09:37.841 --rc geninfo_all_blocks=1 00:09:37.841 --rc geninfo_unexecuted_blocks=1 00:09:37.841 00:09:37.841 ' 00:09:37.841 13:03:26 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:37.842 --rc genhtml_branch_coverage=1 00:09:37.842 --rc genhtml_function_coverage=1 00:09:37.842 --rc genhtml_legend=1 00:09:37.842 --rc geninfo_all_blocks=1 00:09:37.842 --rc geninfo_unexecuted_blocks=1 00:09:37.842 00:09:37.842 ' 00:09:37.842 13:03:26 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.842 --rc genhtml_branch_coverage=1 00:09:37.842 --rc genhtml_function_coverage=1 00:09:37.842 --rc genhtml_legend=1 00:09:37.842 --rc geninfo_all_blocks=1 00:09:37.842 --rc geninfo_unexecuted_blocks=1 00:09:37.842 00:09:37.842 ' 00:09:37.842 13:03:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:37.842 13:03:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:37.842 13:03:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.842 13:03:26 thread -- common/autotest_common.sh@10 -- # set +x 00:09:37.842 ************************************ 00:09:37.842 START TEST thread_poller_perf 00:09:37.842 ************************************ 00:09:37.842 13:03:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:37.842 [2024-12-06 13:03:26.414952] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:37.842 [2024-12-06 13:03:26.415241] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61658 ] 00:09:38.100 [2024-12-06 13:03:26.566218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.100 [2024-12-06 13:03:26.629271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.100 Running 1000 pollers for 1 seconds with 1 microseconds period. 
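The lcov gate traced above (lt 1.15 2 via cmp_versions) splits both version strings on ".", "-", and ":" and compares the fields numerically from left to right, padding the shorter string with zeros. A condensed stand-alone sketch of that comparison, valid for numeric fields only; this is a hypothetical rewrite, not the exact scripts/common.sh source:

# Succeed if version $1 sorts strictly before version $2.
version_lt() {
    local IFS=.-:                     # same separators as the trace above
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                          # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2, keep the extra --rc options"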
00:09:39.475 [2024-12-06T13:03:28.115Z] ====================================== 00:09:39.475 [2024-12-06T13:03:28.115Z] busy:2213680470 (cyc) 00:09:39.475 [2024-12-06T13:03:28.115Z] total_run_count: 305000 00:09:39.475 [2024-12-06T13:03:28.115Z] tsc_hz: 2200000000 (cyc) 00:09:39.475 [2024-12-06T13:03:28.115Z] ====================================== 00:09:39.475 [2024-12-06T13:03:28.115Z] poller_cost: 7257 (cyc), 3298 (nsec) 00:09:39.475 00:09:39.475 real 0m1.296s 00:09:39.475 user 0m1.139s 00:09:39.475 sys 0m0.049s 00:09:39.475 13:03:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.475 13:03:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:39.475 ************************************ 00:09:39.475 END TEST thread_poller_perf 00:09:39.475 ************************************ 00:09:39.475 13:03:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:39.475 13:03:27 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:39.475 13:03:27 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.475 13:03:27 thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.475 ************************************ 00:09:39.475 START TEST thread_poller_perf 00:09:39.475 ************************************ 00:09:39.476 13:03:27 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:39.476 [2024-12-06 13:03:27.767051] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:39.476 [2024-12-06 13:03:27.767308] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61694 ] 00:09:39.476 [2024-12-06 13:03:27.912345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.476 [2024-12-06 13:03:27.955221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.476 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:09:40.411 [2024-12-06T13:03:29.051Z] ====================================== 00:09:40.411 [2024-12-06T13:03:29.051Z] busy:2201968606 (cyc) 00:09:40.411 [2024-12-06T13:03:29.051Z] total_run_count: 3800000 00:09:40.411 [2024-12-06T13:03:29.051Z] tsc_hz: 2200000000 (cyc) 00:09:40.411 [2024-12-06T13:03:29.051Z] ====================================== 00:09:40.411 [2024-12-06T13:03:29.051Z] poller_cost: 579 (cyc), 263 (nsec) 00:09:40.411 00:09:40.411 real 0m1.264s 00:09:40.411 user 0m1.113s 00:09:40.411 sys 0m0.044s 00:09:40.411 13:03:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.411 13:03:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:40.411 ************************************ 00:09:40.411 END TEST thread_poller_perf 00:09:40.411 ************************************ 00:09:40.670 13:03:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:40.670 ************************************ 00:09:40.670 END TEST thread 00:09:40.670 ************************************ 00:09:40.670 00:09:40.670 real 0m2.853s 00:09:40.670 user 0m2.379s 00:09:40.670 sys 0m0.255s 00:09:40.670 13:03:29 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.670 13:03:29 thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.670 13:03:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:40.670 13:03:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:40.670 13:03:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.670 13:03:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.670 13:03:29 -- common/autotest_common.sh@10 -- # set +x 00:09:40.670 ************************************ 00:09:40.670 START TEST app_cmdline 00:09:40.670 ************************************ 00:09:40.670 13:03:29 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:40.670 * Looking for test storage... 
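The two poller_perf result blocks above reduce to simple arithmetic: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts those cycles through tsc_hz. The integer results below match the tool's output exactly, which suggests it truncates the same way. A quick check in shell arithmetic, with the constants copied from the output above:

# Run 1: 1 us timed pollers -- 2213680470 busy cycles over 305000 polls at 2.2 GHz
busy=2213680470 runs=305000 tsc_hz=2200000000
echo "poller_cost: $(( busy / runs )) (cyc), $(( busy / runs * 1000000000 / tsc_hz )) (nsec)"
# -> poller_cost: 7257 (cyc), 3298 (nsec)

# Run 2: 0 us (untimed) pollers are far cheaper per call
busy=2201968606 runs=3800000
echo "poller_cost: $(( busy / runs )) (cyc), $(( busy / runs * 1000000000 / tsc_hz )) (nsec)"
# -> poller_cost: 579 (cyc), 263 (nsec)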
00:09:40.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:40.670 13:03:29 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.670 13:03:29 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.670 13:03:29 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.670 13:03:29 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.670 13:03:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:40.928 13:03:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:40.928 13:03:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.928 13:03:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:40.928 13:03:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.928 13:03:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.928 13:03:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.928 13:03:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.928 --rc genhtml_branch_coverage=1 00:09:40.928 --rc genhtml_function_coverage=1 00:09:40.928 --rc genhtml_legend=1 00:09:40.928 --rc geninfo_all_blocks=1 00:09:40.928 --rc geninfo_unexecuted_blocks=1 00:09:40.928 00:09:40.928 ' 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.928 --rc genhtml_branch_coverage=1 00:09:40.928 --rc genhtml_function_coverage=1 00:09:40.928 --rc genhtml_legend=1 00:09:40.928 --rc geninfo_all_blocks=1 00:09:40.928 --rc geninfo_unexecuted_blocks=1 00:09:40.928 
00:09:40.928 ' 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.928 --rc genhtml_branch_coverage=1 00:09:40.928 --rc genhtml_function_coverage=1 00:09:40.928 --rc genhtml_legend=1 00:09:40.928 --rc geninfo_all_blocks=1 00:09:40.928 --rc geninfo_unexecuted_blocks=1 00:09:40.928 00:09:40.928 ' 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.928 --rc genhtml_branch_coverage=1 00:09:40.928 --rc genhtml_function_coverage=1 00:09:40.928 --rc genhtml_legend=1 00:09:40.928 --rc geninfo_all_blocks=1 00:09:40.928 --rc geninfo_unexecuted_blocks=1 00:09:40.928 00:09:40.928 ' 00:09:40.928 13:03:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:40.928 13:03:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61771 00:09:40.928 13:03:29 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:40.928 13:03:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61771 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61771 ']' 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.928 13:03:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:40.928 [2024-12-06 13:03:29.394096] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:40.928 [2024-12-06 13:03:29.394479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61771 ] 00:09:40.928 [2024-12-06 13:03:29.552618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.187 [2024-12-06 13:03:29.610847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.445 13:03:29 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.445 13:03:29 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:41.445 13:03:29 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:41.704 { 00:09:41.704 "fields": { 00:09:41.704 "commit": "6696ebaae", 00:09:41.704 "major": 25, 00:09:41.704 "minor": 1, 00:09:41.704 "patch": 0, 00:09:41.704 "suffix": "-pre" 00:09:41.704 }, 00:09:41.704 "version": "SPDK v25.01-pre git sha1 6696ebaae" 00:09:41.704 } 00:09:41.704 13:03:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:41.704 13:03:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:41.704 13:03:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:41.704 13:03:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:41.704 13:03:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:41.704 13:03:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.704 13:03:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.704 13:03:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:41.704 13:03:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:41.704 13:03:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:41.704 13:03:30 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:41.963 2024/12/06 13:03:30 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:09:41.963 request: 00:09:41.963 { 00:09:41.963 "method": "env_dpdk_get_mem_stats", 00:09:41.963 "params": {} 00:09:41.963 } 00:09:41.963 Got JSON-RPC error response 00:09:41.963 GoRPCClient: error on JSON-RPC call 00:09:41.963 13:03:30 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:41.963 13:03:30 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:41.963 13:03:30 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:41.963 13:03:30 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:41.963 13:03:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61771 00:09:41.963 13:03:30 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61771 ']' 00:09:41.963 13:03:30 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61771 00:09:41.963 13:03:30 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:41.963 13:03:30 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.963 13:03:30 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61771 00:09:42.221 killing process with pid 61771 00:09:42.221 13:03:30 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.221 13:03:30 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.221 13:03:30 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61771' 00:09:42.221 13:03:30 app_cmdline -- common/autotest_common.sh@973 -- # kill 61771 00:09:42.221 13:03:30 app_cmdline -- common/autotest_common.sh@978 -- # wait 61771 00:09:42.480 00:09:42.480 real 0m1.896s 00:09:42.480 user 0m2.286s 00:09:42.480 sys 0m0.534s 00:09:42.480 13:03:31 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.480 13:03:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:42.480 ************************************ 00:09:42.480 END TEST app_cmdline 00:09:42.480 ************************************ 00:09:42.480 13:03:31 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:42.480 13:03:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.480 13:03:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.480 13:03:31 -- common/autotest_common.sh@10 -- # set +x 00:09:42.480 ************************************ 00:09:42.480 START TEST version 00:09:42.480 ************************************ 00:09:42.480 13:03:31 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:42.740 * Looking for test storage... 
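The app_cmdline section above exercises spdk_tgt's RPC whitelist: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer, and anything else, here env_dpdk_get_mem_stats, is refused with JSON-RPC error -32601 (Method not found) even though an unrestricted target would serve it. Distilled from the trace above, using the same binaries and paths; note a target must already be listening on /var/tmp/spdk.sock before rpc.py is called, which waitforlisten handles in the harness:

# Target that only answers two RPC methods, as in cmdline.sh.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
    --rpcs-allowed spdk_get_version,rpc_get_methods &

# Whitelisted: prints the version object shown above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version

# Not whitelisted: fails with Code=-32601 Msg=Method not found.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats \
    || echo "rejected, as the NOT wrapper above expects"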
00:09:42.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:42.740 13:03:31 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.740 13:03:31 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.740 13:03:31 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.740 13:03:31 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.740 13:03:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.740 13:03:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.740 13:03:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.740 13:03:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.740 13:03:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.740 13:03:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.740 13:03:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.740 13:03:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.740 13:03:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.740 13:03:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.740 13:03:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.740 13:03:31 version -- scripts/common.sh@344 -- # case "$op" in 00:09:42.740 13:03:31 version -- scripts/common.sh@345 -- # : 1 00:09:42.740 13:03:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.740 13:03:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.740 13:03:31 version -- scripts/common.sh@365 -- # decimal 1 00:09:42.740 13:03:31 version -- scripts/common.sh@353 -- # local d=1 00:09:42.740 13:03:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.740 13:03:31 version -- scripts/common.sh@355 -- # echo 1 00:09:42.740 13:03:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.740 13:03:31 version -- scripts/common.sh@366 -- # decimal 2 00:09:42.740 13:03:31 version -- scripts/common.sh@353 -- # local d=2 00:09:42.740 13:03:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.740 13:03:31 version -- scripts/common.sh@355 -- # echo 2 00:09:42.740 13:03:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.740 13:03:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.740 13:03:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.740 13:03:31 version -- scripts/common.sh@368 -- # return 0 00:09:42.740 13:03:31 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.740 13:03:31 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.740 --rc genhtml_branch_coverage=1 00:09:42.740 --rc genhtml_function_coverage=1 00:09:42.740 --rc genhtml_legend=1 00:09:42.740 --rc geninfo_all_blocks=1 00:09:42.740 --rc geninfo_unexecuted_blocks=1 00:09:42.740 00:09:42.740 ' 00:09:42.740 13:03:31 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.740 --rc genhtml_branch_coverage=1 00:09:42.740 --rc genhtml_function_coverage=1 00:09:42.740 --rc genhtml_legend=1 00:09:42.740 --rc geninfo_all_blocks=1 00:09:42.740 --rc geninfo_unexecuted_blocks=1 00:09:42.740 00:09:42.740 ' 00:09:42.740 13:03:31 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.740 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:42.740 --rc genhtml_branch_coverage=1 00:09:42.740 --rc genhtml_function_coverage=1 00:09:42.740 --rc genhtml_legend=1 00:09:42.740 --rc geninfo_all_blocks=1 00:09:42.740 --rc geninfo_unexecuted_blocks=1 00:09:42.740 00:09:42.740 ' 00:09:42.740 13:03:31 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.740 --rc genhtml_branch_coverage=1 00:09:42.740 --rc genhtml_function_coverage=1 00:09:42.740 --rc genhtml_legend=1 00:09:42.740 --rc geninfo_all_blocks=1 00:09:42.740 --rc geninfo_unexecuted_blocks=1 00:09:42.740 00:09:42.740 ' 00:09:42.740 13:03:31 version -- app/version.sh@17 -- # get_header_version major 00:09:42.740 13:03:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:42.740 13:03:31 version -- app/version.sh@14 -- # cut -f2 00:09:42.740 13:03:31 version -- app/version.sh@14 -- # tr -d '"' 00:09:42.740 13:03:31 version -- app/version.sh@17 -- # major=25 00:09:42.740 13:03:31 version -- app/version.sh@18 -- # get_header_version minor 00:09:42.740 13:03:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:42.740 13:03:31 version -- app/version.sh@14 -- # cut -f2 00:09:42.740 13:03:31 version -- app/version.sh@14 -- # tr -d '"' 00:09:42.741 13:03:31 version -- app/version.sh@18 -- # minor=1 00:09:42.741 13:03:31 version -- app/version.sh@19 -- # get_header_version patch 00:09:42.741 13:03:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:42.741 13:03:31 version -- app/version.sh@14 -- # cut -f2 00:09:42.741 13:03:31 version -- app/version.sh@14 -- # tr -d '"' 00:09:42.741 13:03:31 version -- app/version.sh@19 -- # patch=0 00:09:42.741 13:03:31 version -- app/version.sh@20 -- # get_header_version suffix 00:09:42.741 13:03:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:42.741 13:03:31 version -- app/version.sh@14 -- # cut -f2 00:09:42.741 13:03:31 version -- app/version.sh@14 -- # tr -d '"' 00:09:42.741 13:03:31 version -- app/version.sh@20 -- # suffix=-pre 00:09:42.741 13:03:31 version -- app/version.sh@22 -- # version=25.1 00:09:42.741 13:03:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:42.741 13:03:31 version -- app/version.sh@28 -- # version=25.1rc0 00:09:42.741 13:03:31 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:42.741 13:03:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:42.741 13:03:31 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:42.741 13:03:31 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:42.741 00:09:42.741 real 0m0.266s 00:09:42.741 user 0m0.165s 00:09:42.741 sys 0m0.133s 00:09:42.741 13:03:31 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.741 13:03:31 version -- common/autotest_common.sh@10 -- # set +x 00:09:42.741 ************************************ 00:09:42.741 END TEST version 00:09:42.741 ************************************ 00:09:42.741 13:03:31 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:42.741 13:03:31 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:42.741 13:03:31 -- spdk/autotest.sh@194 -- # uname -s 00:09:42.741 13:03:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:42.741 13:03:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:42.741 13:03:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:42.741 13:03:31 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:42.741 13:03:31 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:42.741 13:03:31 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:42.741 13:03:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.741 13:03:31 -- common/autotest_common.sh@10 -- # set +x 00:09:43.000 13:03:31 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:43.000 13:03:31 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:43.000 13:03:31 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:43.000 13:03:31 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:09:43.000 13:03:31 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:43.000 13:03:31 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:43.000 13:03:31 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:43.000 13:03:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.000 13:03:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.000 13:03:31 -- common/autotest_common.sh@10 -- # set +x 00:09:43.000 ************************************ 00:09:43.000 START TEST nvmf_tcp 00:09:43.000 ************************************ 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:43.000 * Looking for test storage... 00:09:43.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.000 13:03:31 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.000 --rc genhtml_branch_coverage=1 00:09:43.000 --rc genhtml_function_coverage=1 00:09:43.000 --rc genhtml_legend=1 00:09:43.000 --rc geninfo_all_blocks=1 00:09:43.000 --rc geninfo_unexecuted_blocks=1 00:09:43.000 00:09:43.000 ' 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.000 --rc genhtml_branch_coverage=1 00:09:43.000 --rc genhtml_function_coverage=1 00:09:43.000 --rc genhtml_legend=1 00:09:43.000 --rc geninfo_all_blocks=1 00:09:43.000 --rc geninfo_unexecuted_blocks=1 00:09:43.000 00:09:43.000 ' 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.000 --rc genhtml_branch_coverage=1 00:09:43.000 --rc genhtml_function_coverage=1 00:09:43.000 --rc genhtml_legend=1 00:09:43.000 --rc geninfo_all_blocks=1 00:09:43.000 --rc geninfo_unexecuted_blocks=1 00:09:43.000 00:09:43.000 ' 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.000 --rc genhtml_branch_coverage=1 00:09:43.000 --rc genhtml_function_coverage=1 00:09:43.000 --rc genhtml_legend=1 00:09:43.000 --rc geninfo_all_blocks=1 00:09:43.000 --rc geninfo_unexecuted_blocks=1 00:09:43.000 00:09:43.000 ' 00:09:43.000 13:03:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:43.000 13:03:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:43.000 13:03:31 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.000 13:03:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.000 ************************************ 00:09:43.000 START TEST nvmf_target_core 00:09:43.000 ************************************ 00:09:43.000 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:43.260 * Looking for test storage... 00:09:43.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.260 --rc genhtml_branch_coverage=1 00:09:43.260 --rc genhtml_function_coverage=1 00:09:43.260 --rc genhtml_legend=1 00:09:43.260 --rc geninfo_all_blocks=1 00:09:43.260 --rc geninfo_unexecuted_blocks=1 00:09:43.260 00:09:43.260 ' 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.260 --rc genhtml_branch_coverage=1 00:09:43.260 --rc genhtml_function_coverage=1 00:09:43.260 --rc genhtml_legend=1 00:09:43.260 --rc geninfo_all_blocks=1 00:09:43.260 --rc geninfo_unexecuted_blocks=1 00:09:43.260 00:09:43.260 ' 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.260 --rc genhtml_branch_coverage=1 00:09:43.260 --rc genhtml_function_coverage=1 00:09:43.260 --rc genhtml_legend=1 00:09:43.260 --rc geninfo_all_blocks=1 00:09:43.260 --rc geninfo_unexecuted_blocks=1 00:09:43.260 00:09:43.260 ' 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.260 --rc genhtml_branch_coverage=1 00:09:43.260 --rc genhtml_function_coverage=1 00:09:43.260 --rc genhtml_legend=1 00:09:43.260 --rc geninfo_all_blocks=1 00:09:43.260 --rc geninfo_unexecuted_blocks=1 00:09:43.260 00:09:43.260 ' 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.260 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.261 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.261 ************************************ 00:09:43.261 START TEST nvmf_abort 00:09:43.261 ************************************ 00:09:43.261 13:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:43.521 * Looking for test storage... 
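Note the recoverable error that repeats through the nvmf sections above and below: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) cannot compare an empty string as an integer, so it prints "integer expression expected" and the condition simply evaluates false. The usual ways to make such a numeric test safe against unset values; this is a generic sketch of the pitfall, not a patch to common.sh:

flag=""                                          # empty, as in the trace above

[ "$flag" -eq 1 ] && echo on                     # what the log shows: an error, then false

[ "${flag:-0}" -eq 1 ] && echo on                # default the value before comparing
[ -n "$flag" ] && [ "$flag" -eq 1 ] && echo on   # or require non-empty first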
00:09:43.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:43.521 13:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.521 13:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.521 13:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.521 --rc genhtml_branch_coverage=1 00:09:43.521 --rc genhtml_function_coverage=1 00:09:43.521 --rc genhtml_legend=1 00:09:43.521 --rc geninfo_all_blocks=1 00:09:43.521 --rc geninfo_unexecuted_blocks=1 00:09:43.521 00:09:43.521 ' 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.521 --rc genhtml_branch_coverage=1 00:09:43.521 --rc genhtml_function_coverage=1 00:09:43.521 --rc genhtml_legend=1 00:09:43.521 --rc geninfo_all_blocks=1 00:09:43.521 --rc geninfo_unexecuted_blocks=1 00:09:43.521 00:09:43.521 ' 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.521 --rc genhtml_branch_coverage=1 00:09:43.521 --rc genhtml_function_coverage=1 00:09:43.521 --rc genhtml_legend=1 00:09:43.521 --rc geninfo_all_blocks=1 00:09:43.521 --rc geninfo_unexecuted_blocks=1 00:09:43.521 00:09:43.521 ' 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.521 --rc genhtml_branch_coverage=1 00:09:43.521 --rc genhtml_function_coverage=1 00:09:43.521 --rc genhtml_legend=1 00:09:43.521 --rc geninfo_all_blocks=1 00:09:43.521 --rc geninfo_unexecuted_blocks=1 00:09:43.521 00:09:43.521 ' 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
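For NET_TYPE=virt, nvmftestinit ends in nvmf_veth_init, whose trace follows below: it creates a network namespace nvmf_tgt_ns_spdk and four veth pairs whose bridge-side peers are enslaved to nvmf_br, with the target ends moved into the namespace as 10.0.0.3 and 10.0.0.4 and the initiator ends left in the root namespace as 10.0.0.1 and 10.0.0.2. One initiator/target pair of that topology, condensed from the ip(8) calls traced below, with names and addresses exactly as the log uses them:

# Namespace for the target side of the test network.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_br ends stay in the root namespace for bridging.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target 10.0.0.3, one /24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the root-namespace peers so initiator reaches target.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br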
00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.521 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.522 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:43.522 
13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:09:43.522 Cannot find device "nvmf_init_br" 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:43.522 Cannot find device "nvmf_init_br2" 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:43.522 Cannot find device "nvmf_tgt_br" 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.522 Cannot find device "nvmf_tgt_br2" 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:43.522 Cannot find device "nvmf_init_br" 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:43.522 Cannot find device "nvmf_init_br2" 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:43.522 Cannot find device "nvmf_tgt_br" 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:43.522 Cannot find device "nvmf_tgt_br2" 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:09:43.522 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:43.781 Cannot find device "nvmf_br" 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:43.781 Cannot find device "nvmf_init_if" 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:43.781 Cannot find device "nvmf_init_if2" 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
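The run of "Cannot find device" messages above is expected: nvmf_veth_init tears down any leftover topology before building a fresh one, and on a clean runner none of the interfaces exist yet. Each failing command is immediately followed by a "true" in the trace, which is the visible half of an ignore-errors guard that keeps `set -e` from aborting the script. The script routes this through a helper, but the effect is equivalent to:

    # Best-effort cleanup; each command may report "Cannot find device"
    # on a fresh runner, and the trailing guard keeps `set -e` happy.
    ip link set nvmf_init_br nomaster  || true
    ip link set nvmf_init_br down      || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if        || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true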
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:43.781 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
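The topology nvmf_veth_init builds here is symmetric: two initiator-side veth pairs stay in the root namespace (10.0.0.1/24 and 10.0.0.2/24 on nvmf_init_if/nvmf_init_if2), two target-side pairs have their "if" ends moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), and all four bridge-side peers are enslaved to nvmf_br so initiator and target can reach each other across namespaces. Condensed to a single target-side pair, the recipe from the trace is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br master nvmf_br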
tcp --dport 4420 -j ACCEPT' 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:44.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:44.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.387 ms 00:09:44.039 00:09:44.039 --- 10.0.0.3 ping statistics --- 00:09:44.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.039 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:44.039 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:44.039 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:09:44.039 00:09:44.039 --- 10.0.0.4 ping statistics --- 00:09:44.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.039 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:44.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:44.039 00:09:44.039 --- 10.0.0.1 ping statistics --- 00:09:44.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.039 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:44.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:44.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:09:44.039 00:09:44.039 --- 10.0.0.2 ping statistics --- 00:09:44.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.039 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=62196 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 62196 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 62196 ']' 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.039 13:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:44.039 [2024-12-06 13:03:32.657269] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
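The four pings above verify the full connectivity matrix before anything NVMe-related starts: root namespace to both target addresses (10.0.0.3, 10.0.0.4) and the target namespace back to both initiator addresses (10.0.0.1, 10.0.0.2). With the bridge proven, nvmfappstart prefixes the target binary with the namespace wrapper — so the SPDK app only sees the namespaced interfaces — records the pid, and polls the RPC socket before the test proceeds. A reduced sketch of that start-and-wait pattern (the real waitforlisten has retry limits and richer error handling beyond this):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Wait until the app answers on its UNIX-domain RPC socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done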
00:09:44.040 [2024-12-06 13:03:32.657392] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.297 [2024-12-06 13:03:32.813231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.297 [2024-12-06 13:03:32.878623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.297 [2024-12-06 13:03:32.878689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.297 [2024-12-06 13:03:32.878703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.297 [2024-12-06 13:03:32.878714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.297 [2024-12-06 13:03:32.878723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.297 [2024-12-06 13:03:32.880070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.297 [2024-12-06 13:03:32.880228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.297 [2024-12-06 13:03:32.880237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:44.555 [2024-12-06 13:03:33.055071] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:44.555 Malloc0 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:44.555 
Delay0 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:44.555 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:44.556 [2024-12-06 13:03:33.128973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.556 13:03:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:44.813 [2024-12-06 13:03:33.319250] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:46.717 Initializing NVMe Controllers 00:09:46.717 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:46.717 controller IO queue size 128 less than required 00:09:46.717 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:46.717 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:46.717 Initialization complete. Launching workers. 
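build/examples/abort now drives the freshly built cnode0 subsystem: with -q 128 it keeps 128 I/Os outstanding on a single core and races an abort command against each one for the one-second test window. Note the core split: the target runs with -m 0xE (reactors on cores 1-3, per the messages above) while the initiator pins itself to core 0 with -c 0x1. The Delay0 bdev is what makes the aborts land — bdev_delay_create's latency arguments are in microseconds, so the 1000000 values above inject a full second of latency per I/O, ample time for an abort to catch it. The invocation, with flags as recorded in the trace:

    # -r: target transport ID   -c: core mask   -t: duration in seconds
    # -q: queue depth           -l: log level (warning trims the output)
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -q 128 -l warning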
00:09:46.717 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27179 00:09:46.717 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27240, failed to submit 62 00:09:46.717 success 27183, unsuccessful 57, failed 0 00:09:46.717 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:46.717 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.717 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.974 rmmod nvme_tcp 00:09:46.974 rmmod nvme_fabrics 00:09:46.974 rmmod nvme_keyring 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 62196 ']' 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 62196 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 62196 ']' 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 62196 00:09:46.974 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:09:46.975 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.975 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62196 00:09:46.975 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:46.975 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:46.975 killing process with pid 62196 00:09:46.975 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62196' 00:09:46.975 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 62196 00:09:46.975 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 62196 00:09:47.231 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.231 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
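Reading the tallies, roughly: of the 27240 abort commands submitted, 27183 caught their I/O in flight ("success"), 57 arrived after the I/O had already completed ("unsuccessful"), and a further 62 aborts could not even be queued ("failed to submit"); on the I/O side only 123 operations finished normally, while the 27179 counted as "failed" are the ones aborted out from under the driver — which is the point of the test. Teardown then unwinds the initiator-side modules; nvmftestfini retries the unload in a loop because nvme-tcp can stay referenced for a moment after the last controller detaches. A sketch of the retry idiom visible in the trace (the per-iteration delay is an assumption, not shown in this excerpt):

    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1   # illustrative back-off
    done
    modprobe -v -r nvme-fabrics || true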
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:47.232 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:47.489 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:47.489 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.489 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.489 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:47.489 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.489 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.489 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.489 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:09:47.489 00:09:47.489 real 0m4.130s 00:09:47.489 user 0m10.453s 00:09:47.489 sys 0m1.102s 00:09:47.489 ************************************ 00:09:47.489 END TEST nvmf_abort 00:09:47.489 ************************************ 00:09:47.489 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.489 13:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.489 13:03:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
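The iptr helper above is why every firewall rule earlier was inserted with `-m comment --comment 'SPDK_NVMF:...'`: cleanup never tries to remember which rules it added, it simply rewrites the whole ruleset minus anything carrying the tag, leaving unrelated rules untouched. The pattern, exactly as it appears in the trace:

    # Drop every rule the test suite tagged; keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore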
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:47.489 13:03:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.489 13:03:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.489 13:03:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.489 ************************************ 00:09:47.489 START TEST nvmf_ns_hotplug_stress 00:09:47.489 ************************************ 00:09:47.489 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:47.489 * Looking for test storage... 00:09:47.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:47.489 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:47.489 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:09:47.489 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.747 --rc genhtml_branch_coverage=1 00:09:47.747 --rc genhtml_function_coverage=1 00:09:47.747 --rc genhtml_legend=1 00:09:47.747 --rc geninfo_all_blocks=1 00:09:47.747 --rc geninfo_unexecuted_blocks=1 00:09:47.747 00:09:47.747 ' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.747 --rc genhtml_branch_coverage=1 00:09:47.747 --rc genhtml_function_coverage=1 00:09:47.747 --rc genhtml_legend=1 00:09:47.747 --rc geninfo_all_blocks=1 00:09:47.747 --rc geninfo_unexecuted_blocks=1 00:09:47.747 00:09:47.747 ' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:47.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.747 --rc genhtml_branch_coverage=1 00:09:47.747 --rc genhtml_function_coverage=1 00:09:47.747 --rc genhtml_legend=1 00:09:47.747 --rc geninfo_all_blocks=1 00:09:47.747 --rc geninfo_unexecuted_blocks=1 00:09:47.747 00:09:47.747 ' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.747 --rc genhtml_branch_coverage=1 00:09:47.747 --rc genhtml_function_coverage=1 00:09:47.747 --rc genhtml_legend=1 00:09:47.747 --rc geninfo_all_blocks=1 00:09:47.747 --rc geninfo_unexecuted_blocks=1 00:09:47.747 00:09:47.747 ' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
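The cmp_versions walkthrough above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x, which controls which branch/function coverage flags get exported a few records later. It splits each version string on '.', '-' and ':' and compares fields numerically, most significant first. Reduced to its core (the `lt` name matches the helper invoked in the trace; the padding of missing fields with zero is a simplification):

    lt() {  # usage: lt 1.15 2  -> exit 0 when $1 < $2
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # versions are equal
    }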
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
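Each test re-sources nvmf/common.sh, which derives a fresh host identity from nvme-cli: `nvme gen-hostnqn` emits a UUID-based NQN, and the UUID portion doubles as the host ID that travels with it on every connect, as the NVME_HOST array above shows. One way to express the relationship (the suffix extraction here is an assumption about how the ID is derived, not the script's literal code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare UUID portion
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")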
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.747 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.747 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:47.748 13:03:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:47.748 Cannot find device "nvmf_init_br" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:47.748 Cannot find device "nvmf_init_br2" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:47.748 Cannot find device "nvmf_tgt_br" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.748 Cannot find device "nvmf_tgt_br2" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:47.748 Cannot find device "nvmf_init_br" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:47.748 Cannot find device "nvmf_init_br2" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:47.748 Cannot find device "nvmf_tgt_br" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:47.748 Cannot find device "nvmf_tgt_br2" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:47.748 Cannot find device "nvmf_br" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:47.748 Cannot find device "nvmf_init_if" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:47.748 Cannot find device "nvmf_init_if2" 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.748 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:48.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:48.006 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:48.007 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:48.007 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:09:48.007 00:09:48.007 --- 10.0.0.3 ping statistics --- 00:09:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.007 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:48.007 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:09:48.007 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:09:48.007 00:09:48.007 --- 10.0.0.4 ping statistics --- 00:09:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.007 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:48.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:48.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:48.007 00:09:48.007 --- 10.0.0.1 ping statistics --- 00:09:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.007 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:48.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:48.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:09:48.007 00:09:48.007 --- 10.0.0.2 ping statistics --- 00:09:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.007 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:48.007 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62475 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62475 00:09:48.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62475 ']' 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.265 13:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.265 [2024-12-06 13:03:36.719696] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:48.265 [2024-12-06 13:03:36.719970] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.265 [2024-12-06 13:03:36.873351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:48.524 [2024-12-06 13:03:36.943191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.524 [2024-12-06 13:03:36.943506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.524 [2024-12-06 13:03:36.943668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.524 [2024-12-06 13:03:36.943824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.524 [2024-12-06 13:03:36.943870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:48.524 [2024-12-06 13:03:36.945275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.524 [2024-12-06 13:03:36.945372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.524 [2024-12-06 13:03:36.945378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.524 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.524 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:09:48.524 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:48.524 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:48.524 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.524 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.524 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:48.524 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:48.828 [2024-12-06 13:03:37.404503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.828 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:49.086 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:49.653 [2024-12-06 13:03:38.013281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:49.653 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:49.912 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:50.170 Malloc0 00:09:50.170 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:50.429 Delay0 00:09:50.429 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.687 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:50.947 NULL1 00:09:50.947 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:51.206 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:51.206 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=62598 00:09:51.206 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:09:51.206 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.583 Read completed with error (sct=0, sc=11) 00:09:52.583 13:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.841 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:52.841 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:53.099 true 00:09:53.099 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:09:53.099 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.037 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.037 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:54.037 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:54.296 true 00:09:54.296 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:09:54.296 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.555 13:03:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.814 13:03:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:54.814 13:03:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:55.383 true 00:09:55.383 13:03:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:09:55.383 13:03:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.642 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.902 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:55.902 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:56.163 true 00:09:56.163 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:09:56.163 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.729 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.294 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:57.294 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:57.294 true 00:09:57.294 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:09:57.294 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.551 13:03:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.117 13:03:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:58.117 13:03:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:58.117 true 00:09:58.117 13:03:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:09:58.117 13:03:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.374 13:03:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.631 13:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:58.631 13:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:58.888 true 00:09:58.888 13:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:09:58.888 13:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.820 13:03:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.078 13:03:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:00.078 13:03:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:00.337 true 00:10:00.337 13:03:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:00.337 13:03:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.596 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.854 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:00.854 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:01.113 true 00:10:01.371 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:01.371 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.371 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.630 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:01.630 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:01.888 true 00:10:02.146 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:02.146 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.081 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.081 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:03.081 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:03.340 
true 00:10:03.340 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:03.340 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.597 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.854 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:03.854 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:04.173 true 00:10:04.173 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:04.173 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.447 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.706 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:04.706 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:04.964 true 00:10:04.965 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:04.965 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.899 13:03:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.157 13:03:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:06.157 13:03:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:06.417 true 00:10:06.417 13:03:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:06.417 13:03:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.674 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.240 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:07.240 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:07.498 true 00:10:07.498 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:07.498 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.756 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.014 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:08.014 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:08.272 true 00:10:08.272 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:08.272 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.531 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.789 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:08.789 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:09.047 true 00:10:09.047 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:09.047 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.982 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.239 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:10.239 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:10.496 true 00:10:10.496 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:10.496 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.753 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.010 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:11.010 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:11.268 true 00:10:11.268 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:11.268 13:03:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.525 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.783 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:11.783 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:12.041 true 00:10:12.041 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:12.041 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.976 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.234 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:13.234 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:13.491 true 00:10:13.491 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:13.491 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.749 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.007 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:14.007 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:14.321 true 00:10:14.321 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:14.321 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.579 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.838 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:14.838 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:15.097 true 00:10:15.097 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:15.097 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.031 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.291 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:16.291 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:16.550 true 00:10:16.550 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:16.550 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.808 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.066 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:17.066 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:17.324 true 00:10:17.324 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:17.324 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.889 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.148 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:18.148 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:18.406 true 00:10:18.406 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:18.406 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.665 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.923 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:18.923 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:19.181 true 00:10:19.181 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:19.181 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:20.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:20.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:20.707 true 00:10:20.707 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:20.707 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.965 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.224 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:21.224 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:21.482 true 00:10:21.482 Initializing NVMe Controllers 00:10:21.482 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:21.482 Controller IO queue size 128, less than required. 00:10:21.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:21.482 Controller IO queue size 128, less than required. 00:10:21.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:21.482 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:21.482 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:21.482 Initialization complete. Launching workers. 
00:10:21.482 ========================================================
00:10:21.482                                                                            Latency(us)
00:10:21.482 Device Information                                                      :       IOPS      MiB/s    Average        min         max
00:10:21.482 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     375.67       0.18  125191.59    3453.83  1137284.37
00:10:21.482 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    7485.16       3.65   17100.35    2946.39   645422.21
00:10:21.482 ========================================================
00:10:21.482 Total                                                                   :    7860.84       3.84   22266.10    2946.39  1137284.37
00:10:21.482
00:10:21.482 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:21.482 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.740 13:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.998 13:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:21.998 13:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:22.256 true 00:10:22.256 13:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62598 00:10:22.256 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62598) - No such process 00:10:22.256 13:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62598 00:10:22.256 13:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.513 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:22.773 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:22.773 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:22.773 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:22.773 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:22.773 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:23.340 null0 00:10:23.340 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:23.340 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:23.340 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:23.340 null1 00:10:23.340 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:23.340 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:23.340 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:23.597 null2 00:10:23.854 13:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:23.854 13:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:23.854 13:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:24.111 null3 00:10:24.111 13:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.111 13:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.111 13:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:24.369 null4 00:10:24.369 13:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.369 13:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.369 13:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:24.627 null5 00:10:24.627 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.627 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.627 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:24.886 null6 00:10:24.886 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.886 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.886 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:25.203 null7 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
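null0 through null7 above are eight independent null bdevs, each 100 MiB with a 4096-byte block size, one per add/remove worker. The sh@59/sh@60 trace entries reduce to this loop:

    # create one null bdev per worker (args: name, size_mb, block_size)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 8; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096
    done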
00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.203 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
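Each backgrounded worker runs the script's add_remove function: it pins one namespace ID to one null bdev and hot-adds, then hot-removes it ten times while the other seven workers hammer the same subsystem. Reconstructed from the sh@14 through sh@18 xtrace above (a sketch, not the verbatim script):

    # one hotplug worker: attach $bdev as namespace $nsid, then detach it, 10 times
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    add_remove 1 null0 &   # launched once per namespace/bdev pair; each PID captured via pids+=($!)

The interleaved (( i = 0 )), (( i < 10 )), add_ns and remove_ns entries that follow are these eight background loops sharing a single xtrace stream, which is why the trace ordering looks scrambled.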
00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63673 63674 63676 63677 63679 63682 63684 63687 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.204 13:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:25.463 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:25.463 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.463 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:25.463 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.463 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.720 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:25.978 
13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:25.978 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:26.236 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:26.236 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:26.236 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:26.236 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:26.236 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:26.236 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:26.236 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:26.493 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:26.493 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:26.493 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:26.493 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:26.493 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:26.493 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:26.493 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:26.493 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:26.493 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:26.493 13:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:26.493 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:26.493 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:26.493 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:26.493 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:26.493 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:26.493 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:26.751 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:27.008 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:27.283 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:27.550 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:27.550 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.550 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.550 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:27.550 13:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:27.550 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:27.550 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:27.550 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:27.550 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.550 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.550 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:27.807 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.807 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.807 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:27.807 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:27.807 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.807 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.807 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:27.807 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.807 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.807 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:27.808 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.808 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.808 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:27.808 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.808 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.808 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:27.808 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:27.808 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:27.808 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:28.066 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:28.066 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:28.066 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.066 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.066 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:28.066 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:28.066 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:28.066 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.324 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:28.583 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.583 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.583 13:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:28.583 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:28.842 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.134 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.394 13:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:29.394 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:29.652 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:29.911 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:30.170 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:30.429 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:30.429 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.429 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.429 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:30.429 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:30.429 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.429 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.429 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:30.429 13:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:30.429 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:30.429 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.688 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:30.947 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:30.947 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.947 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.947 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:30.947 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:30.947 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:30.947 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:30.947 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:30.947 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:31.206 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:31.465 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:31.465 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:31.465 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:31.465 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:31.465 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:31.465 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:31.465 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:31.465 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:31.465 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:31.465 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:31.465 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:31.724 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:31.724 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:31.724 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
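The loop traced at target/ns_hotplug_stress.sh lines 16-18 above boils down to the following sketch, reconstructed only from the xtrace output; the backgrounding and the random namespace choice are assumptions inferred from the interleaved ordering, not the script verbatim.

  # Hedged reconstruction of the hotplug stress loop (assumptions noted above).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do                                          # @16
      n=$(( RANDOM % 8 + 1 ))
      # Attach bdev nullK as namespace ID n; nullK is named after n-1 here.
      "$rpc" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$(( n - 1 ))" &   # @17
      # Detach a (possibly different) namespace picked at random.
      "$rpc" nvmf_subsystem_remove_ns "$subsys" "$(( RANDOM % 8 + 1 ))" &   # @18
  done
  wait  # let any outstanding add/remove RPCs drain before teardown

Running the add and remove RPCs concurrently is what produces the shuffled @17/@18 ordering in the trace: the point of the test is to race namespace attach and detach against each other on the same subsystem.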
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:31.983 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62475 ']'
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62475
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62475 ']'
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62475
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62475
00:10:31.983 killing process with pid 62475
13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62475'
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62475
00:10:31.983 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62475
00:10:32.241 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:32.241 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:32.242 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:32.242 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:10:32.242 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:10:32.242 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:32.242 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:10:32.242 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:32.242 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:10:32.242 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:10:32.242 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:10:32.242 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:10:32.501 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:10:32.501 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:10:32.501 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:10:32.501 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:10:32.501 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:10:32.501 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:10:32.501 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:10:32.501 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:10:32.501 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:10:32.501
00:10:32.501 real 0m45.048s
00:10:32.501 user 3m40.985s
00:10:32.501 sys 0m13.754s
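The teardown just traced has two halves: nvmfcleanup retries unloading the kernel NVMe modules, and killprocess tears down the SPDK target by pid. A condensed sketch of that shape, taken from the trace rather than the scripts themselves, with retry delay and error handling simplified:

  # Simplified reconstruction of nvmftestfini's cleanup path (trace-derived).
  nvmfcleanup() {
      sync
      set +e                                   # module unload may race with I/O
      for i in {1..20}; do                     # retry up to 20 times, as traced
          modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      done
      set -e
      return 0
  }
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                # no pid recorded
      kill -0 "$pid" 2> /dev/null || return 0  # already exited
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                              # reap so the port is really free
  }

After that, the traced iptr helper restores iptables minus any SPDK_NVMF rules (iptables-save | grep -v SPDK_NVMF | iptables-restore) and nvmf_veth_fini deletes the virtual links, which is why the same nvmf_init_br/nvmf_tgt_br names appear again below when the next test re-creates them.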
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:32.501 ************************************
00:10:32.501 END TEST nvmf_ns_hotplug_stress
00:10:32.501 ************************************
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:32.501 ************************************
00:10:32.501 START TEST nvmf_delete_subsystem
00:10:32.501 ************************************
00:10:32.501 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:32.761 * Looking for test storage...
00:10:32.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:32.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.761 --rc genhtml_branch_coverage=1
00:10:32.761 --rc genhtml_function_coverage=1
00:10:32.761 --rc genhtml_legend=1
00:10:32.761 --rc geninfo_all_blocks=1
00:10:32.761 --rc geninfo_unexecuted_blocks=1
00:10:32.761
00:10:32.761 '
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:32.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.761 --rc genhtml_branch_coverage=1
00:10:32.761 --rc genhtml_function_coverage=1
00:10:32.761 --rc genhtml_legend=1
00:10:32.761 --rc geninfo_all_blocks=1
00:10:32.761 --rc geninfo_unexecuted_blocks=1
00:10:32.761
00:10:32.761 '
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:32.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.761 --rc genhtml_branch_coverage=1
00:10:32.761 --rc genhtml_function_coverage=1
00:10:32.761 --rc genhtml_legend=1
00:10:32.761 --rc geninfo_all_blocks=1
00:10:32.761 --rc geninfo_unexecuted_blocks=1
00:10:32.761
00:10:32.761 '
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:32.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.761 --rc genhtml_branch_coverage=1
00:10:32.761 --rc genhtml_function_coverage=1
00:10:32.761 --rc genhtml_legend=1
00:10:32.761 --rc geninfo_all_blocks=1
00:10:32.761 --rc geninfo_unexecuted_blocks=1
00:10:32.761
00:10:32.761 '
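The cmp_versions trace above (scripts/common.sh@333-368) is an element-wise version comparison: both version strings are split on ".", "-" and ":" into arrays, then compared component by component. A condensed sketch of the same idea, reconstructed from the trace and not identical to the script:

  # Trace-derived sketch: lt/cmp_versions compare dotted versions numerically.
  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-: op=$2 ver1 ver2 v max
      read -ra ver1 <<< "$1"                     # "1.15" -> (1 15)
      read -ra ver2 <<< "$3"                     # "2"    -> (2)
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do          # missing components default to 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
  }
  lt 1.15 2 && echo "lcov older than 2"   # matches the @373 call in the trace

In the traced run the first component already decides it (1 < 2), which is why the loop returns after a single iteration and the LCOV coverage options for the pre-2.0 lcov syntax get exported right after.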
13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:32.761 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:32.762 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
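The "line 33: [: : integer expression expected" message above is a real (if harmless) shell bug in the traced common.sh: an unset variable expands to the empty string, and test's -eq operator refuses a non-integer operand, so the check errors and falls through to the false branch. A minimal repro and the usual guard; FLAG is a placeholder name, not the variable the script actually tests:

  # Reproduces the message, then shows the defaulted-expansion fix.
  FLAG=""
  [ "$FLAG" -eq 1 ] && echo enabled          # prints: [: : integer expression expected
  [ "${FLAG:-0}" -eq 1 ] || echo "disabled"  # quiet: empty value defaults to 0

Because bash's [ returns nonzero on the error, the script behaves as if the flag were off, which is why the trace simply continues to line 37.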
nvmftestinit 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
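
The nvmf_veth_init variables traced here (the two bridge-port names follow in the next entries) describe a fixed dual-path layout: both initiator interfaces stay in the root namespace, both target interfaces move into a private network namespace, and a bridge joins the veth peers so NVMe/TCP traffic crosses a real network hop on a single VM. A standalone restatement of that address plan, using only names and addresses from the trace; the comments are ours:

    # Address plan behind nvmf_veth_init (names/addresses as traced)
    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk   # nvmf_tgt will run inside this netns
    NVMF_FIRST_INITIATOR_IP=10.0.0.1         # root netns, dev nvmf_init_if
    NVMF_SECOND_INITIATOR_IP=10.0.0.2        # root netns, dev nvmf_init_if2
    NVMF_FIRST_TARGET_IP=10.0.0.3            # inside the netns, dev nvmf_tgt_if
    NVMF_SECOND_TARGET_IP=10.0.0.4           # inside the netns, dev nvmf_tgt_if2
    NVMF_BRIDGE=nvmf_br                      # root-netns bridge joining the veth peers

With all four addresses in one /24 behind the bridge, host and namespace can reach each other in both directions, which the four pings further down confirm.
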
00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:32.762 Cannot find device "nvmf_init_br" 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:32.762 Cannot find device "nvmf_init_br2" 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:32.762 Cannot find device "nvmf_tgt_br" 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.762 Cannot find device "nvmf_tgt_br2" 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:10:32.762 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:33.071 Cannot find device "nvmf_init_br" 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:33.071 Cannot find device "nvmf_init_br2" 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:33.071 Cannot find device "nvmf_tgt_br" 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:33.071 Cannot find device "nvmf_tgt_br2" 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:33.071 Cannot find device "nvmf_br" 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:33.071 Cannot find device "nvmf_init_if" 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:33.071 Cannot find device "nvmf_init_if2" 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:33.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
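
The "Cannot find device" and "Cannot open network namespace" complaints here are expected on a clean runner: before building anything, nvmf_veth_init tears down whatever a previous run may have left behind, and every failed cleanup command is tolerated (the lone "true" entries after each failure are exactly those fallbacks). A condensed sketch of the teardown-then-rebuild pattern for one of the two paths, assuming iproute2; the explicit `2>/dev/null || true` spelling is our illustration of what the harness achieves:

    # Idempotent teardown: ignore "Cannot find device" style errors
    ip link set nvmf_init_br nomaster 2>/dev/null || true
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true

    # Rebuild: namespace, veth pairs, addresses, bridge (commands as traced below)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # enslave both *_br peers
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up

The iptables ACCEPT rules added after the bridge wiring carry an SPDK_NVMF comment for the same reason: teardown can later remove exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore, as the fini trace at the end of this test shows.
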
00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:33.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:33.071 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:33.072 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:33.072 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:33.072 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:33.072 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:10:33.072 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:33.072 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:33.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:33.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:10:33.353 00:10:33.353 --- 10.0.0.3 ping statistics --- 00:10:33.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.353 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:33.353 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:33.353 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:10:33.353 00:10:33.353 --- 10.0.0.4 ping statistics --- 00:10:33.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.353 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:33.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:33.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:33.353 00:10:33.353 --- 10.0.0.1 ping statistics --- 00:10:33.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.353 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:33.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:33.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:10:33.353 00:10:33.353 --- 10.0.0.2 ping statistics --- 00:10:33.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.353 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=65070 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 65070 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 65070 ']' 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.353 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.353 [2024-12-06 13:04:21.815869] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:10:33.353 [2024-12-06 13:04:21.815971] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.353 [2024-12-06 13:04:21.966965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:33.612 [2024-12-06 13:04:22.033965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.612 [2024-12-06 13:04:22.034035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.612 [2024-12-06 13:04:22.034049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.612 [2024-12-06 13:04:22.034059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.612 [2024-12-06 13:04:22.034069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.612 [2024-12-06 13:04:22.038493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.612 [2024-12-06 13:04:22.038513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 [2024-12-06 13:04:22.219848] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
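
The target was launched with -m 0x3, and the reactor notices above show the effect: the mask is a plain CPU bitmap in which bit n selects core n. A one-line illustration (the printf is ours, not part of the harness):

    # -m 0x3 sets bits 0 and 1, so one reactor starts on core 0 and one on core 1
    printf 'mask=0x%x\n' "$(( (1 << 0) | (1 << 1) ))"   # prints mask=0x3

The perf initiator started later uses the complementary mask 0xC (cores 2 and 3), so target and initiator never compete for the same cores.
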
00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 [2024-12-06 13:04:22.236420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 NULL1 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.612 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.870 Delay0 00:10:33.870 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.870 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.870 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.870 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.870 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.870 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=65108 00:10:33.870 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:33.870 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:33.870 [2024-12-06 13:04:22.450728] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
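
The fixture just assembled is what gives the delete below something to race against: a null bdev is wrapped in a delay bdev whose four latency knobs are all 1,000,000 us, so each I/O takes roughly a second and the perf initiator (queue depth 128, 5-second run) still has a full queue in flight when nvmf_delete_subsystem fires after the 2-second sleep. The bring-up as a plain rpc.py sequence, with subcommands and arguments exactly as traced; driving them through scripts/rpc.py instead of the harness's rpc_cmd wrapper is our assumption:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as traced
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_null_create NULL1 1000 512             # 1000 MiB backing bdev, 512 B blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # exposed as namespace 1

The flood of "completed with error (sct=0, sc=8)" entries that follows is therefore the expected outcome rather than a test failure: queued I/O has to be aborted when the subsystem disappears underneath it.
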
00:10:35.769 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:35.769 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.027 [... several hundred per-I/O completion lines omitted here: interleaved "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries, logged while the subsystem is deleted under load ...]
00:10:36.028 [2024-12-06 13:04:24.488206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f562400d4f0 is same with the state(6) to be set
[... further Read/Write error completions omitted ...]
00:10:36.968 [2024-12-06 13:04:25.465048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78faa0 is same with the state(6) to be set
[... further Read/Write error completions omitted ...]
00:10:36.968 [2024-12-06 13:04:25.485868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79dea0 is same with the state(6) to be set
[... further Read/Write error completions omitted ...]
00:10:36.968 [2024-12-06 13:04:25.486590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f562400d040 is same with the state(6) to be set
[... further Read/Write error completions omitted ...]
00:10:36.969 [2024-12-06 13:04:25.486778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f562400d820 is same with the state(6) to be set
[... further Read/Write error completions omitted ...]
00:10:36.969 [2024-12-06 13:04:25.489161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79aa50 is same with the state(6) to be set
00:10:36.969 Initializing NVMe Controllers
00:10:36.969 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:10:36.969 Controller IO queue size 128, less than required.
00:10:36.969 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:36.969 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:36.969 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:36.969 Initialization complete.
Launching workers. 00:10:36.969 ======================================================== 00:10:36.969 Latency(us) 00:10:36.969 Device Information : IOPS MiB/s Average min max 00:10:36.969 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.71 0.09 901732.41 1415.47 1011446.39 00:10:36.969 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.93 0.08 980293.85 458.70 2001048.91 00:10:36.969 ======================================================== 00:10:36.969 Total : 343.64 0.17 937380.23 458.70 2001048.91 00:10:36.969 00:10:36.969 [2024-12-06 13:04:25.489938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78faa0 (9): Bad file descriptor 00:10:36.969 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:36.969 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.969 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:36.969 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65108 00:10:36.969 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65108 00:10:37.534 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (65108) - No such process 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 65108 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 65108 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 65108 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:37.534 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:37.535 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.535 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.535 [2024-12-06 13:04:26.011115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65154 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65154 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:37.535 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:37.792 [2024-12-06 13:04:26.204844] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
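
The second half of the test re-creates the subsystem, starts a second, 3-second perf run, and then simply polls the perf PID until the process goes away; that is all the repeated kill -0 65154 / sleep 0.5 probes below are doing. kill -0 sends no signal, it only tests whether the PID still exists. A minimal sketch of that bounded wait (PID and bound taken from this run, the loop spelling is ours):

    perf_pid=65154
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do               # true while perf is still alive
        (( delay++ > 20 )) && { echo 'perf hung'; exit 1; } # give up after roughly 10 s
        sleep 0.5
    done

Once the PID is gone, kill prints "No such process" in the trace and the script falls through to wait to collect perf's exit status.
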
00:10:38.073 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:38.073 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65154 00:10:38.073 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:38.638 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:38.638 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65154 00:10:38.638 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:39.203 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:39.203 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65154 00:10:39.203 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:39.461 13:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:39.461 13:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65154 00:10:39.461 13:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:40.026 13:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:40.026 13:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65154 00:10:40.026 13:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:40.591 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:40.591 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65154 00:10:40.591 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:40.849 Initializing NVMe Controllers 00:10:40.849 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:40.849 Controller IO queue size 128, less than required. 00:10:40.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:40.849 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:40.849 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:40.849 Initialization complete. Launching workers. 
00:10:40.849 ======================================================== 00:10:40.849 Latency(us) 00:10:40.849 Device Information : IOPS MiB/s Average min max 00:10:40.849 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005362.19 1000230.23 1013779.14 00:10:40.849 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003109.67 1000136.24 1010920.26 00:10:40.849 ======================================================== 00:10:40.849 Total : 256.00 0.12 1004235.93 1000136.24 1013779.14 00:10:40.849 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65154 00:10:41.107 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65154) - No such process 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65154 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.107 rmmod nvme_tcp 00:10:41.107 rmmod nvme_fabrics 00:10:41.107 rmmod nvme_keyring 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 65070 ']' 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 65070 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 65070 ']' 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 65070 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65070 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.107 killing process with pid 65070 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65070' 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 65070 00:10:41.107 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 65070 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:41.366 13:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:10:41.626 ************************************ 00:10:41.626 END TEST nvmf_delete_subsystem 00:10:41.626 ************************************ 00:10:41.626 00:10:41.626 real 0m8.982s 00:10:41.626 user 0m27.488s 00:10:41.626 sys 0m1.497s 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.626 ************************************ 00:10:41.626 START TEST nvmf_host_management 00:10:41.626 ************************************ 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:41.626 * Looking for test storage... 00:10:41.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:10:41.626 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:41.887 
13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:41.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.887 --rc genhtml_branch_coverage=1 00:10:41.887 --rc genhtml_function_coverage=1 00:10:41.887 --rc genhtml_legend=1 00:10:41.887 --rc geninfo_all_blocks=1 00:10:41.887 --rc geninfo_unexecuted_blocks=1 00:10:41.887 00:10:41.887 ' 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:41.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.887 --rc genhtml_branch_coverage=1 00:10:41.887 --rc genhtml_function_coverage=1 00:10:41.887 --rc genhtml_legend=1 00:10:41.887 --rc geninfo_all_blocks=1 00:10:41.887 --rc geninfo_unexecuted_blocks=1 00:10:41.887 00:10:41.887 ' 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:41.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.887 --rc genhtml_branch_coverage=1 00:10:41.887 --rc genhtml_function_coverage=1 00:10:41.887 --rc genhtml_legend=1 00:10:41.887 --rc geninfo_all_blocks=1 00:10:41.887 --rc geninfo_unexecuted_blocks=1 00:10:41.887 00:10:41.887 ' 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:41.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.887 --rc genhtml_branch_coverage=1 00:10:41.887 --rc 
genhtml_function_coverage=1 00:10:41.887 --rc genhtml_legend=1 00:10:41.887 --rc geninfo_all_blocks=1 00:10:41.887 --rc geninfo_unexecuted_blocks=1 00:10:41.887 00:10:41.887 ' 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.887 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:41.888 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:41.888 13:04:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:41.888 Cannot find device "nvmf_init_br" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:41.888 Cannot find device "nvmf_init_br2" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:41.888 Cannot find device "nvmf_tgt_br" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.888 Cannot find device "nvmf_tgt_br2" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:41.888 Cannot find device "nvmf_init_br" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:41.888 Cannot find device "nvmf_init_br2" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:41.888 Cannot find device "nvmf_tgt_br" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:41.888 Cannot find device "nvmf_tgt_br2" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:41.888 Cannot find device "nvmf_br" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:10:41.888 13:04:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:41.888 Cannot find device "nvmf_init_if" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:41.888 Cannot find device "nvmf_init_if2" 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.888 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:10:41.889 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:41.889 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:41.889 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:42.147 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:42.148 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:42.148 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:10:42.148 00:10:42.148 --- 10.0.0.3 ping statistics --- 00:10:42.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.148 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:42.148 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:10:42.148 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:10:42.148 00:10:42.148 --- 10.0.0.4 ping statistics --- 00:10:42.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.148 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:42.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:42.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:42.148 00:10:42.148 --- 10.0.0.1 ping statistics --- 00:10:42.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.148 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:42.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:42.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:10:42.148 00:10:42.148 --- 10.0.0.2 ping statistics --- 00:10:42.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.148 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:42.148 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:42.406 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:42.406 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65440 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65440 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
'[' -z 65440 ']' 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.407 13:04:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:42.407 [2024-12-06 13:04:30.860442] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:10:42.407 [2024-12-06 13:04:30.860563] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.407 [2024-12-06 13:04:31.019136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.665 [2024-12-06 13:04:31.090474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.665 [2024-12-06 13:04:31.090530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.665 [2024-12-06 13:04:31.090544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.666 [2024-12-06 13:04:31.090555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.666 [2024-12-06 13:04:31.090564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
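
The waitforlisten step traced above amounts to launching nvmf_tgt inside the test network namespace and polling its JSON-RPC socket until the app answers. A minimal sketch of that pattern, assuming rpc_cmd resolves to scripts/rpc.py as in the default harness setup (the retry budget and the probe method used here are illustrative, not the harness's exact values):

# Start the target in the netns built by nvmf_veth_init, then poll the
# RPC socket; rpc_get_methods fails until the app is ready to serve RPCs.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; then
        break    # socket is up and answering
    fi
    sleep 0.1
done
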
00:10:42.666 [2024-12-06 13:04:31.091809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.666 [2024-12-06 13:04:31.091867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.666 [2024-12-06 13:04:31.091999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:42.666 [2024-12-06 13:04:31.092004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:42.666 [2024-12-06 13:04:31.276463] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.666 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:42.924 Malloc0 00:10:42.924 [2024-12-06 13:04:31.350868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:42.924 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65499 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65499 /var/tmp/bdevperf.sock 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65499 ']' 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:42.924 { 00:10:42.924 "params": { 00:10:42.924 "name": "Nvme$subsystem", 00:10:42.924 "trtype": "$TEST_TRANSPORT", 00:10:42.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.924 "adrfam": "ipv4", 00:10:42.924 "trsvcid": "$NVMF_PORT", 00:10:42.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.924 "hdgst": ${hdgst:-false}, 00:10:42.924 "ddgst": ${ddgst:-false} 00:10:42.924 }, 00:10:42.924 "method": "bdev_nvme_attach_controller" 00:10:42.924 } 00:10:42.924 EOF 00:10:42.924 )") 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:42.924 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:42.924 "params": { 00:10:42.924 "name": "Nvme0", 00:10:42.924 "trtype": "tcp", 00:10:42.924 "traddr": "10.0.0.3", 00:10:42.924 "adrfam": "ipv4", 00:10:42.924 "trsvcid": "4420", 00:10:42.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:42.924 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:42.924 "hdgst": false, 00:10:42.924 "ddgst": false 00:10:42.924 }, 00:10:42.924 "method": "bdev_nvme_attach_controller" 00:10:42.924 }' 00:10:42.924 [2024-12-06 13:04:31.449577] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
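
The config that bdevperf receives on /dev/fd/63 above is assembled by shell heredoc templating: placeholders such as $subsystem are expanded per subsystem, and jq validates the result before process substitution hands it over. A condensed sketch of that pattern (the gen_config name is illustrative; the harness embeds stanzas like this inside its larger bdev config document):

# Expand one attach-controller stanza; cat + an unquoted heredoc does the
# templating, and jq . both validates the JSON and pretty-prints it.
gen_config() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# Process substitution is why the trace shows --json /dev/fd/63:
gen_config 0 | jq .
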
00:10:42.924 [2024-12-06 13:04:31.449669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65499 ] 00:10:43.184 [2024-12-06 13:04:31.602985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.184 [2024-12-06 13:04:31.671012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.444 Running I/O for 10 seconds... 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:43.444 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.444 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:43.444 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:43.444 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:43.704 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 
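
waitforio, traced above, is a bounded polling loop: it queries num_read_ops for Nvme0n1 over the bdevperf RPC socket up to ten times, sleeping 0.25 s between probes, and succeeds once at least 100 reads have completed (the first probe saw only 67). The same loop as a standalone sketch, again assuming rpc_cmd maps to scripts/rpc.py:

# Succeed once the bdev has completed >= 100 reads; give up after 10 tries.
ret=1
for ((i = 10; i != 0; i--)); do
    read_io_count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done
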
00:10:43.704 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:43.704 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.705 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:43.705 [2024-12-06 13:04:32.311675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [2024-12-06 13:04:32.311746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [2024-12-06 13:04:32.311759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [2024-12-06 13:04:32.311768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [2024-12-06 13:04:32.311776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [2024-12-06 13:04:32.311784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [2024-12-06 13:04:32.311792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [2024-12-06 13:04:32.311800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [2024-12-06 13:04:32.311808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [2024-12-06 13:04:32.311816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [2024-12-06 13:04:32.311824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x126ee20 is same with the state(6) to be set 00:10:43.705 [... the same tcp.c:1790 recv-state *ERROR* line repeats verbatim for timestamps 13:04:32.311832 through 13:04:32.311992 ...] 00:10:43.705 [2024-12-06 13:04:32.312000]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set
00:10:43.705 [2024-12-06 13:04:32.312018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set
00:10:43.705 [2024-12-06 13:04:32.312027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee20 is same with the state(6) to be set
00:10:43.705 [2024-12-06 13:04:32.315765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:10:43.705 [2024-12-06 13:04:32.315812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:43.705 [13:04:32.315827-13:04:32.315876] (the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for qid:0 cid:1 through cid:3)
00:10:43.705 [2024-12-06 13:04:32.315885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492130 is same with the state(6) to be set
00:10:43.705 [2024-12-06 13:04:32.316274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:43.705 [2024-12-06 13:04:32.316302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:43.705 [13:04:32.316325-13:04:32.317619] (the same WRITE / ABORTED - SQ DELETION pair repeats for sqid:1 cid:1 through cid:63; lba steps by 128 from 82048 up to 89984, len:128 each, all aborted with the same status)
00:10:43.707 [2024-12-06 13:04:32.317648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
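The burst above is the expected signature of this test case: the TCP connection to the target is torn down while bdevperf still has its full queue depth of 64 WRITEs outstanding, so every queued command is completed with ABORTED - SQ DELETION and the qpair finally reports CQ transport error -6. When triaging a captured console log, a one-liner like the following can condense such a burst (a minimal sketch, assuming the console output was saved to a hypothetical file build.log):

# count aborted completions per queue ID in the saved log
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c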
00:10:43.707 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:43.707 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:10:43.707 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:43.707 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:10:43.707 [2024-12-06 13:04:32.318840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:10:43.707 task offset: 81920 on job bdev=Nvme0n1 fails
00:10:43.707
00:10:43.707 Latency(us)
00:10:43.707 [2024-12-06T13:04:32.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:43.707 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:43.707 Job: Nvme0n1 ended in about 0.45 seconds with error
00:10:43.707 Verification LBA range: start 0x0 length 0x400
00:10:43.707 Nvme0n1 : 0.45 1416.47 88.53 141.65 0.00 39429.89 2308.65 44802.79
00:10:43.707 [2024-12-06T13:04:32.347Z] ===================================================================================================================
00:10:43.707 [2024-12-06T13:04:32.347Z] Total : 1416.47 88.53 141.65 0.00 39429.89 2308.65 44802.79
00:10:43.707 [2024-12-06 13:04:32.320907] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:43.707 [2024-12-06 13:04:32.320940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2492130 (9): Bad file descriptor
00:10:43.707 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:43.707 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:10:43.707 [2024-12-06 13:04:32.333701] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
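The failed-job table reads against its header as: runtime 0.45 s, 1416.47 IOPS, 88.53 MiB/s, 141.65 failed I/Os per second, no timeouts, and an average latency of 39429.89 us. The IOPS and bandwidth columns are mutually consistent for the 64 KiB I/O size this run uses (-o 65536), which can be checked directly:

# 1416.47 IOPS x 65536 bytes per I/O, expressed in MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 1416.47 * 65536 / 1048576 }'    # -> 88.53 MiB/s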
00:10:44.699 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65499 00:10:44.699 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65499) - No such process 00:10:44.699 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:44.699 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:44.699 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:44.699 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:44.699 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:44.699 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:44.956 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:44.956 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:44.956 { 00:10:44.956 "params": { 00:10:44.956 "name": "Nvme$subsystem", 00:10:44.956 "trtype": "$TEST_TRANSPORT", 00:10:44.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:44.956 "adrfam": "ipv4", 00:10:44.956 "trsvcid": "$NVMF_PORT", 00:10:44.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:44.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:44.956 "hdgst": ${hdgst:-false}, 00:10:44.956 "ddgst": ${ddgst:-false} 00:10:44.956 }, 00:10:44.956 "method": "bdev_nvme_attach_controller" 00:10:44.956 } 00:10:44.956 EOF 00:10:44.956 )") 00:10:44.956 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:44.956 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:44.956 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:44.956 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:44.956 "params": { 00:10:44.956 "name": "Nvme0", 00:10:44.956 "trtype": "tcp", 00:10:44.956 "traddr": "10.0.0.3", 00:10:44.956 "adrfam": "ipv4", 00:10:44.956 "trsvcid": "4420", 00:10:44.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:44.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:44.956 "hdgst": false, 00:10:44.956 "ddgst": false 00:10:44.956 }, 00:10:44.956 "method": "bdev_nvme_attach_controller" 00:10:44.956 }' 00:10:44.956 [2024-12-06 13:04:33.396527] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:10:44.956 [2024-12-06 13:04:33.396624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65545 ] 00:10:44.956 [2024-12-06 13:04:33.545634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.213 [2024-12-06 13:04:33.599016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.213 Running I/O for 1 seconds... 
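The JSON that gen_nvmf_target_json prints above is everything bdevperf needs: a single bdev_nvme_attach_controller call that creates bdev Nvme0 against the target at 10.0.0.3:4420 over TCP. The run is reproducible by hand with the same flags the script passes (a minimal sketch, assuming the printed JSON has been saved to a hypothetical file bdevperf.json instead of being fed through /dev/fd/62):

# one-second verify workload, queue depth 64, 64 KiB I/Os
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json bdevperf.json -q 64 -o 65536 -w verify -t 1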
00:10:46.586 1536.00 IOPS, 96.00 MiB/s
00:10:46.586 Latency(us)
00:10:46.586 [2024-12-06T13:04:35.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:46.586 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:46.586 Verification LBA range: start 0x0 length 0x400
00:10:46.586 Nvme0n1 : 1.03 1559.83 97.49 0.00 0.00 40186.14 5213.09 37891.72
00:10:46.586 [2024-12-06T13:04:35.226Z] ===================================================================================================================
00:10:46.586 [2024-12-06T13:04:35.227Z] Total : 1559.83 97.49 0.00 0.00 40186.14 5213.09 37891.72
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:46.587 rmmod nvme_tcp
00:10:46.587 rmmod nvme_fabrics
00:10:46.587 rmmod nvme_keyring
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:46.587 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65440 ']'
00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65440
00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65440 ']'
00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65440
00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65440
00:10:46.588 killing process with pid 65440
13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management --
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65440' 00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65440 00:10:46.588 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65440 00:10:46.847 [2024-12-06 13:04:35.335855] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:46.847 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:47.105 00:10:47.105 real 0m5.459s 00:10:47.105 user 0m19.526s 00:10:47.105 sys 0m1.374s 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.105 ************************************ 00:10:47.105 END TEST nvmf_host_management 00:10:47.105 ************************************ 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.105 ************************************ 00:10:47.105 START TEST nvmf_lvol 00:10:47.105 ************************************ 00:10:47.105 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:47.105 * Looking for test storage... 
00:10:47.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:47.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.363 --rc genhtml_branch_coverage=1 00:10:47.363 --rc genhtml_function_coverage=1 00:10:47.363 --rc genhtml_legend=1 00:10:47.363 --rc geninfo_all_blocks=1 00:10:47.363 --rc geninfo_unexecuted_blocks=1 00:10:47.363 00:10:47.363 ' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:47.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.363 --rc genhtml_branch_coverage=1 00:10:47.363 --rc genhtml_function_coverage=1 00:10:47.363 --rc genhtml_legend=1 00:10:47.363 --rc geninfo_all_blocks=1 00:10:47.363 --rc geninfo_unexecuted_blocks=1 00:10:47.363 00:10:47.363 ' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:47.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.363 --rc genhtml_branch_coverage=1 00:10:47.363 --rc genhtml_function_coverage=1 00:10:47.363 --rc genhtml_legend=1 00:10:47.363 --rc geninfo_all_blocks=1 00:10:47.363 --rc geninfo_unexecuted_blocks=1 00:10:47.363 00:10:47.363 ' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:47.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.363 --rc genhtml_branch_coverage=1 00:10:47.363 --rc genhtml_function_coverage=1 00:10:47.363 --rc genhtml_legend=1 00:10:47.363 --rc geninfo_all_blocks=1 00:10:47.363 --rc geninfo_unexecuted_blocks=1 00:10:47.363 00:10:47.363 ' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.363 13:04:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.363 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:47.363 
13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
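The variables being set in this stretch describe the veth topology the lvol test runs on: the target side lives in the nvmf_tgt_ns_spdk namespace, and NVMF_TARGET_NS_CMD is the prefix every target-side command gets wrapped in, as the interface configuration that follows shows. A minimal sketch of that pattern, assuming the namespace already exists:

# run a command inside the target namespace, the same way the test scripts
# expand "${NVMF_TARGET_NS_CMD[@]}"
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
"${NVMF_TARGET_NS_CMD[@]}" ip addr show dev nvmf_tgt_if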
00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:47.363 Cannot find device "nvmf_init_br" 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:47.363 Cannot find device "nvmf_init_br2" 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:47.363 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:47.363 Cannot find device "nvmf_tgt_br" 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:47.364 Cannot find device "nvmf_tgt_br2" 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:47.364 Cannot find device "nvmf_init_br" 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:47.364 Cannot find device "nvmf_init_br2" 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:47.364 Cannot find device "nvmf_tgt_br" 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:47.364 Cannot find device "nvmf_tgt_br2" 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:47.364 Cannot find device "nvmf_br" 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:47.364 Cannot find device "nvmf_init_if" 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:10:47.364 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:47.622 Cannot find device "nvmf_init_if2" 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:47.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:47.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:47.622 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:47.622 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:47.622 00:10:47.622 --- 10.0.0.3 ping statistics --- 00:10:47.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.622 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:47.622 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:47.622 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:47.622 00:10:47.622 --- 10.0.0.4 ping statistics --- 00:10:47.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.622 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:47.622 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:47.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:47.622 00:10:47.622 --- 10.0.0.1 ping statistics --- 00:10:47.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.623 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:47.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:47.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:10:47.623 00:10:47.623 --- 10.0.0.2 ping statistics --- 00:10:47.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.623 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65813 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65813 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65813 ']' 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.623 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:47.880 [2024-12-06 13:04:36.312567] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:10:47.880 [2024-12-06 13:04:36.312675] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.880 [2024-12-06 13:04:36.465627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:48.137 [2024-12-06 13:04:36.533984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.137 [2024-12-06 13:04:36.534283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.137 [2024-12-06 13:04:36.534383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.137 [2024-12-06 13:04:36.534517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.137 [2024-12-06 13:04:36.534615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.137 [2024-12-06 13:04:36.535921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.137 [2024-12-06 13:04:36.535995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.137 [2024-12-06 13:04:36.535998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.137 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.137 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:10:48.137 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.137 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.137 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:48.137 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.137 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:48.394 [2024-12-06 13:04:36.957092] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.394 13:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.958 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:48.958 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.215 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:49.215 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:49.473 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:49.730 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2176ff76-45ab-4669-939d-dba5a0c85837 00:10:49.730 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
2176ff76-45ab-4669-939d-dba5a0c85837 lvol 20 00:10:49.988 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d34c6a34-4983-454f-acf3-d330e4f1e994 00:10:49.988 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:50.246 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d34c6a34-4983-454f-acf3-d330e4f1e994 00:10:50.503 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:50.760 [2024-12-06 13:04:39.380745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:51.017 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:51.273 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:51.273 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65947 00:10:51.273 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:52.206 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d34c6a34-4983-454f-acf3-d330e4f1e994 MY_SNAPSHOT 00:10:52.484 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5b45d4ec-88a1-4e17-94ea-2965a7ff7196 00:10:52.484 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d34c6a34-4983-454f-acf3-d330e4f1e994 30 00:10:53.049 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5b45d4ec-88a1-4e17-94ea-2965a7ff7196 MY_CLONE 00:10:53.306 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ceb990a2-e08f-4046-ad61-f5deea9b68ac 00:10:53.306 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ceb990a2-e08f-4046-ad61-f5deea9b68ac 00:10:53.871 13:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65947 00:11:01.999 Initializing NVMe Controllers 00:11:01.999 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:11:01.999 Controller IO queue size 128, less than required. 00:11:01.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:01.999 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:01.999 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:01.999 Initialization complete. Launching workers. 
00:11:01.999 ======================================================== 00:11:01.999 Latency(us) 00:11:01.999 Device Information : IOPS MiB/s Average min max 00:11:01.999 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10080.10 39.38 12705.23 1178.46 90027.06 00:11:01.999 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10020.30 39.14 12780.73 2928.67 67673.43 00:11:01.999 ======================================================== 00:11:01.999 Total : 20100.40 78.52 12742.87 1178.46 90027.06 00:11:01.999 00:11:01.999 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:01.999 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d34c6a34-4983-454f-acf3-d330e4f1e994 00:11:02.257 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2176ff76-45ab-4669-939d-dba5a0c85837 00:11:02.515 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:02.515 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:02.515 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:02.515 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.515 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:02.515 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.515 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:02.516 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.516 13:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.516 rmmod nvme_tcp 00:11:02.516 rmmod nvme_fabrics 00:11:02.516 rmmod nvme_keyring 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65813 ']' 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65813 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65813 ']' 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65813 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65813 00:11:02.516 killing process with pid 65813 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65813' 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65813 00:11:02.516 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65813 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:02.773 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:03.031 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:03.031 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:03.031 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.031 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.031 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:03.031 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.031 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.031 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.032 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:11:03.032 00:11:03.032 real 0m15.916s 00:11:03.032 user 1m5.916s 00:11:03.032 sys 0m3.876s 00:11:03.032 ************************************ 00:11:03.032 END TEST nvmf_lvol 00:11:03.032 
************************************ 00:11:03.032 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.032 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:03.032 13:04:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:03.032 13:04:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.032 13:04:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.032 13:04:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.032 ************************************ 00:11:03.032 START TEST nvmf_lvs_grow 00:11:03.032 ************************************ 00:11:03.032 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:03.290 * Looking for test storage... 00:11:03.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.290 --rc genhtml_branch_coverage=1 00:11:03.290 --rc genhtml_function_coverage=1 00:11:03.290 --rc genhtml_legend=1 00:11:03.290 --rc geninfo_all_blocks=1 00:11:03.290 --rc geninfo_unexecuted_blocks=1 00:11:03.290 00:11:03.290 ' 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.290 --rc genhtml_branch_coverage=1 00:11:03.290 --rc genhtml_function_coverage=1 00:11:03.290 --rc genhtml_legend=1 00:11:03.290 --rc geninfo_all_blocks=1 00:11:03.290 --rc geninfo_unexecuted_blocks=1 00:11:03.290 00:11:03.290 ' 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.290 --rc genhtml_branch_coverage=1 00:11:03.290 --rc genhtml_function_coverage=1 00:11:03.290 --rc genhtml_legend=1 00:11:03.290 --rc geninfo_all_blocks=1 00:11:03.290 --rc geninfo_unexecuted_blocks=1 00:11:03.290 00:11:03.290 ' 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.290 --rc genhtml_branch_coverage=1 00:11:03.290 --rc genhtml_function_coverage=1 00:11:03.290 --rc genhtml_legend=1 00:11:03.290 --rc geninfo_all_blocks=1 00:11:03.290 --rc geninfo_unexecuted_blocks=1 00:11:03.290 00:11:03.290 ' 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:03.290 13:04:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.290 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.291 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
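Annotation on the common.sh setup traced above: the harness derives its host identity with nvme gen-hostnqn, which yields a uuid-style NQN, and reuses the uuid tail of that NQN as the host ID passed to nvme connect. A minimal sketch of the same derivation, assuming only uuidgen from util-linux is available (variable names mirror the trace; this is an illustration, not the harness code itself):

    # Build a uuid-style host NQN by hand instead of calling nvme gen-hostnqn.
    NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
    # The host ID is just the uuid suffix of that NQN, matching the trace above.
    NVME_HOSTID="${NVME_HOSTNQN##*:}"
    # Flag array later expanded onto nvme connect invocations.
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")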
00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:03.291 Cannot find device "nvmf_init_br" 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:03.291 Cannot find device "nvmf_init_br2" 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:03.291 Cannot find device "nvmf_tgt_br" 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.291 Cannot find device "nvmf_tgt_br2" 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:03.291 Cannot find device "nvmf_init_br" 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:11:03.291 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:03.291 Cannot find device "nvmf_init_br2" 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:03.549 Cannot find device "nvmf_tgt_br" 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:03.549 Cannot find device "nvmf_tgt_br2" 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:03.549 Cannot find device "nvmf_br" 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:03.549 Cannot find device "nvmf_init_if" 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:03.549 Cannot find device "nvmf_init_if2" 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:03.549 13:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
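Annotation on the nvmf_veth_init sequence traced above: the test network is built from veth pairs whose outer ends are enslaved to a bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace, so the host-side initiator addresses 10.0.0.1/2 can reach the target-side listen addresses 10.0.0.3/4; the ipts calls that follow merely open TCP port 4420 and intra-bridge forwarding. A condensed standalone sketch of the core topology with one pair per side, assuming iproute2 and root privileges (interface and namespace names follow the trace):

    ip netns add nvmf_tgt_ns_spdk                              # namespace that will host nvmf_tgt
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address on the host
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target listen address
    ip link add nvmf_br type bridge                            # bridge joining the outer veth ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.3                                         # host now reaches the target side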
00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.549 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:03.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:11:03.808 00:11:03.808 --- 10.0.0.3 ping statistics --- 00:11:03.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.808 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:03.808 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:03.808 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.145 ms 00:11:03.808 00:11:03.808 --- 10.0.0.4 ping statistics --- 00:11:03.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.808 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:03.808 00:11:03.808 --- 10.0.0.1 ping statistics --- 00:11:03.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.808 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:03.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:03.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:11:03.808 00:11:03.808 --- 10.0.0.2 ping statistics --- 00:11:03.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.808 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66367 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66367 00:11:03.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66367 ']' 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.808 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:03.808 [2024-12-06 13:04:52.317968] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:11:03.808 [2024-12-06 13:04:52.318288] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.099 [2024-12-06 13:04:52.467559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.099 [2024-12-06 13:04:52.521585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.099 [2024-12-06 13:04:52.521827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.099 [2024-12-06 13:04:52.521934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.099 [2024-12-06 13:04:52.521948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.099 [2024-12-06 13:04:52.521956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.099 [2024-12-06 13:04:52.522358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.099 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.099 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:11:04.099 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:04.099 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:04.099 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:04.099 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.099 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:04.357 [2024-12-06 13:04:52.921508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:04.357 ************************************ 00:11:04.357 START TEST lvs_grow_clean 00:11:04.357 ************************************ 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:04.357 13:04:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:04.357 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:04.923 13:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:04.923 13:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:05.180 13:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=769640ab-a369-421b-abc2-9605388685a2 00:11:05.180 13:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 769640ab-a369-421b-abc2-9605388685a2 00:11:05.180 13:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:05.437 13:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:05.437 13:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:05.437 13:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 769640ab-a369-421b-abc2-9605388685a2 lvol 150 00:11:05.695 13:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=435d90f6-965b-473f-9cbe-5de18a96c6f8 00:11:05.695 13:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:05.695 13:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:05.952 [2024-12-06 13:04:54.482442] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:05.952 [2024-12-06 13:04:54.482547] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:05.952 true 00:11:05.952 13:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 769640ab-a369-421b-abc2-9605388685a2 00:11:05.952 13:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:06.210 13:04:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:06.210 13:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:06.468 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 435d90f6-965b-473f-9cbe-5de18a96c6f8 00:11:07.034 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:07.034 [2024-12-06 13:04:55.647135] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:07.292 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:07.550 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66526 00:11:07.550 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:07.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:07.550 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:07.550 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66526 /var/tmp/bdevperf.sock 00:11:07.550 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66526 ']' 00:11:07.550 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:07.550 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.550 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:07.550 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.550 13:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:07.550 [2024-12-06 13:04:55.993223] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:11:07.550 [2024-12-06 13:04:55.993326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66526 ] 00:11:07.550 [2024-12-06 13:04:56.148412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.835 [2024-12-06 13:04:56.218820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.402 13:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.402 13:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:11:08.402 13:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:08.659 Nvme0n1 00:11:08.917 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:08.917 [ 00:11:08.917 { 00:11:08.917 "aliases": [ 00:11:08.917 "435d90f6-965b-473f-9cbe-5de18a96c6f8" 00:11:08.917 ], 00:11:08.917 "assigned_rate_limits": { 00:11:08.917 "r_mbytes_per_sec": 0, 00:11:08.917 "rw_ios_per_sec": 0, 00:11:08.917 "rw_mbytes_per_sec": 0, 00:11:08.917 "w_mbytes_per_sec": 0 00:11:08.917 }, 00:11:08.917 "block_size": 4096, 00:11:08.917 "claimed": false, 00:11:08.917 "driver_specific": { 00:11:08.917 "mp_policy": "active_passive", 00:11:08.917 "nvme": [ 00:11:08.917 { 00:11:08.917 "ctrlr_data": { 00:11:08.917 "ana_reporting": false, 00:11:08.917 "cntlid": 1, 00:11:08.917 "firmware_revision": "25.01", 00:11:08.917 "model_number": "SPDK bdev Controller", 00:11:08.917 "multi_ctrlr": true, 00:11:08.917 "oacs": { 00:11:08.917 "firmware": 0, 00:11:08.917 "format": 0, 00:11:08.917 "ns_manage": 0, 00:11:08.917 "security": 0 00:11:08.917 }, 00:11:08.917 "serial_number": "SPDK0", 00:11:08.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:08.917 "vendor_id": "0x8086" 00:11:08.917 }, 00:11:08.917 "ns_data": { 00:11:08.917 "can_share": true, 00:11:08.917 "id": 1 00:11:08.917 }, 00:11:08.917 "trid": { 00:11:08.917 "adrfam": "IPv4", 00:11:08.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:08.917 "traddr": "10.0.0.3", 00:11:08.917 "trsvcid": "4420", 00:11:08.917 "trtype": "TCP" 00:11:08.917 }, 00:11:08.917 "vs": { 00:11:08.917 "nvme_version": "1.3" 00:11:08.917 } 00:11:08.917 } 00:11:08.917 ] 00:11:08.917 }, 00:11:08.917 "memory_domains": [ 00:11:08.917 { 00:11:08.917 "dma_device_id": "system", 00:11:08.917 "dma_device_type": 1 00:11:08.917 } 00:11:08.917 ], 00:11:08.917 "name": "Nvme0n1", 00:11:08.917 "num_blocks": 38912, 00:11:08.917 "numa_id": -1, 00:11:08.917 "product_name": "NVMe disk", 00:11:08.917 "supported_io_types": { 00:11:08.917 "abort": true, 00:11:08.917 "compare": true, 00:11:08.917 "compare_and_write": true, 00:11:08.917 "copy": true, 00:11:08.917 "flush": true, 00:11:08.917 "get_zone_info": false, 00:11:08.917 "nvme_admin": true, 00:11:08.917 "nvme_io": true, 00:11:08.917 "nvme_io_md": false, 00:11:08.917 "nvme_iov_md": false, 00:11:08.917 "read": true, 00:11:08.917 "reset": true, 00:11:08.917 "seek_data": false, 00:11:08.917 "seek_hole": false, 00:11:08.917 "unmap": true, 00:11:08.917 
"write": true, 00:11:08.917 "write_zeroes": true, 00:11:08.917 "zcopy": false, 00:11:08.917 "zone_append": false, 00:11:08.917 "zone_management": false 00:11:08.917 }, 00:11:08.917 "uuid": "435d90f6-965b-473f-9cbe-5de18a96c6f8", 00:11:08.917 "zoned": false 00:11:08.917 } 00:11:08.917 ] 00:11:09.175 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:09.175 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66574 00:11:09.175 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:09.175 Running I/O for 10 seconds... 00:11:10.108 Latency(us) 00:11:10.108 [2024-12-06T13:04:58.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:10.108 Nvme0n1 : 1.00 7070.00 27.62 0.00 0.00 0.00 0.00 0.00 00:11:10.108 [2024-12-06T13:04:58.748Z] =================================================================================================================== 00:11:10.108 [2024-12-06T13:04:58.749Z] Total : 7070.00 27.62 0.00 0.00 0.00 0.00 0.00 00:11:10.109 00:11:11.099 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 769640ab-a369-421b-abc2-9605388685a2 00:11:11.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.099 Nvme0n1 : 2.00 6929.50 27.07 0.00 0.00 0.00 0.00 0.00 00:11:11.099 [2024-12-06T13:04:59.739Z] =================================================================================================================== 00:11:11.099 [2024-12-06T13:04:59.739Z] Total : 6929.50 27.07 0.00 0.00 0.00 0.00 0.00 00:11:11.099 00:11:11.357 true 00:11:11.357 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 769640ab-a369-421b-abc2-9605388685a2 00:11:11.357 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:11.616 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:11.616 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:11.616 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66574 00:11:12.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.182 Nvme0n1 : 3.00 6875.33 26.86 0.00 0.00 0.00 0.00 0.00 00:11:12.182 [2024-12-06T13:05:00.822Z] =================================================================================================================== 00:11:12.182 [2024-12-06T13:05:00.822Z] Total : 6875.33 26.86 0.00 0.00 0.00 0.00 0.00 00:11:12.182 00:11:13.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.116 Nvme0n1 : 4.00 6947.50 27.14 0.00 0.00 0.00 0.00 0.00 00:11:13.116 [2024-12-06T13:05:01.756Z] =================================================================================================================== 00:11:13.116 [2024-12-06T13:05:01.756Z] Total : 6947.50 27.14 0.00 0.00 0.00 
0.00 0.00 00:11:13.116 00:11:14.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.080 Nvme0n1 : 5.00 6946.20 27.13 0.00 0.00 0.00 0.00 0.00 00:11:14.080 [2024-12-06T13:05:02.720Z] =================================================================================================================== 00:11:14.080 [2024-12-06T13:05:02.720Z] Total : 6946.20 27.13 0.00 0.00 0.00 0.00 0.00 00:11:14.080 00:11:15.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:15.540 Nvme0n1 : 6.00 6917.33 27.02 0.00 0.00 0.00 0.00 0.00 00:11:15.540 [2024-12-06T13:05:04.180Z] =================================================================================================================== 00:11:15.540 [2024-12-06T13:05:04.180Z] Total : 6917.33 27.02 0.00 0.00 0.00 0.00 0.00 00:11:15.540 00:11:16.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.104 Nvme0n1 : 7.00 6896.57 26.94 0.00 0.00 0.00 0.00 0.00 00:11:16.104 [2024-12-06T13:05:04.744Z] =================================================================================================================== 00:11:16.104 [2024-12-06T13:05:04.744Z] Total : 6896.57 26.94 0.00 0.00 0.00 0.00 0.00 00:11:16.104 00:11:17.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.036 Nvme0n1 : 8.00 6888.50 26.91 0.00 0.00 0.00 0.00 0.00 00:11:17.036 [2024-12-06T13:05:05.676Z] =================================================================================================================== 00:11:17.036 [2024-12-06T13:05:05.676Z] Total : 6888.50 26.91 0.00 0.00 0.00 0.00 0.00 00:11:17.036 00:11:18.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.406 Nvme0n1 : 9.00 6834.22 26.70 0.00 0.00 0.00 0.00 0.00 00:11:18.406 [2024-12-06T13:05:07.046Z] =================================================================================================================== 00:11:18.406 [2024-12-06T13:05:07.046Z] Total : 6834.22 26.70 0.00 0.00 0.00 0.00 0.00 00:11:18.406 00:11:19.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.338 Nvme0n1 : 10.00 6822.10 26.65 0.00 0.00 0.00 0.00 0.00 00:11:19.338 [2024-12-06T13:05:07.978Z] =================================================================================================================== 00:11:19.338 [2024-12-06T13:05:07.978Z] Total : 6822.10 26.65 0.00 0.00 0.00 0.00 0.00 00:11:19.338 00:11:19.338 00:11:19.338 Latency(us) 00:11:19.338 [2024-12-06T13:05:07.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.338 Nvme0n1 : 10.01 6831.04 26.68 0.00 0.00 18730.61 8281.37 75306.82 00:11:19.338 [2024-12-06T13:05:07.978Z] =================================================================================================================== 00:11:19.338 [2024-12-06T13:05:07.978Z] Total : 6831.04 26.68 0.00 0.00 18730.61 8281.37 75306.82 00:11:19.338 { 00:11:19.338 "results": [ 00:11:19.338 { 00:11:19.338 "job": "Nvme0n1", 00:11:19.338 "core_mask": "0x2", 00:11:19.338 "workload": "randwrite", 00:11:19.338 "status": "finished", 00:11:19.338 "queue_depth": 128, 00:11:19.338 "io_size": 4096, 00:11:19.338 "runtime": 10.005645, 00:11:19.338 "iops": 6831.043875732149, 00:11:19.338 "mibps": 26.683765139578707, 00:11:19.338 "io_failed": 0, 00:11:19.338 "io_timeout": 0, 00:11:19.338 "avg_latency_us": 
18730.611894089026, 00:11:19.338 "min_latency_us": 8281.367272727273, 00:11:19.338 "max_latency_us": 75306.82181818182 00:11:19.338 } 00:11:19.338 ], 00:11:19.338 "core_count": 1 00:11:19.338 } 00:11:19.338 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66526 00:11:19.338 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66526 ']' 00:11:19.338 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66526 00:11:19.338 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:11:19.339 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.339 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66526 00:11:19.339 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:19.339 killing process with pid 66526 00:11:19.339 Received shutdown signal, test time was about 10.000000 seconds 00:11:19.339 00:11:19.339 Latency(us) 00:11:19.339 [2024-12-06T13:05:07.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.339 [2024-12-06T13:05:07.979Z] =================================================================================================================== 00:11:19.339 [2024-12-06T13:05:07.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:19.339 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:19.339 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66526' 00:11:19.339 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66526 00:11:19.339 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66526 00:11:19.339 13:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:19.903 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:20.163 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:20.163 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 769640ab-a369-421b-abc2-9605388685a2 00:11:20.453 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:20.453 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:20.453 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:20.711 [2024-12-06 13:05:09.204115] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 769640ab-a369-421b-abc2-9605388685a2 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 769640ab-a369-421b-abc2-9605388685a2 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:20.711 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 769640ab-a369-421b-abc2-9605388685a2 00:11:20.969 2024/12/06 13:05:09 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:769640ab-a369-421b-abc2-9605388685a2], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:20.969 request: 00:11:20.969 { 00:11:20.969 "method": "bdev_lvol_get_lvstores", 00:11:20.969 "params": { 00:11:20.969 "uuid": "769640ab-a369-421b-abc2-9605388685a2" 00:11:20.969 } 00:11:20.969 } 00:11:20.969 Got JSON-RPC error response 00:11:20.969 GoRPCClient: error on JSON-RPC call 00:11:20.969 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:11:20.969 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:20.969 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:20.969 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:20.969 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:21.227 aio_bdev 00:11:21.486 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 435d90f6-965b-473f-9cbe-5de18a96c6f8 00:11:21.486 13:05:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=435d90f6-965b-473f-9cbe-5de18a96c6f8 00:11:21.486 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.486 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:11:21.486 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.486 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.486 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:21.743 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 435d90f6-965b-473f-9cbe-5de18a96c6f8 -t 2000 00:11:22.002 [ 00:11:22.002 { 00:11:22.002 "aliases": [ 00:11:22.002 "lvs/lvol" 00:11:22.002 ], 00:11:22.002 "assigned_rate_limits": { 00:11:22.002 "r_mbytes_per_sec": 0, 00:11:22.002 "rw_ios_per_sec": 0, 00:11:22.002 "rw_mbytes_per_sec": 0, 00:11:22.002 "w_mbytes_per_sec": 0 00:11:22.002 }, 00:11:22.002 "block_size": 4096, 00:11:22.002 "claimed": false, 00:11:22.002 "driver_specific": { 00:11:22.002 "lvol": { 00:11:22.002 "base_bdev": "aio_bdev", 00:11:22.002 "clone": false, 00:11:22.002 "esnap_clone": false, 00:11:22.002 "lvol_store_uuid": "769640ab-a369-421b-abc2-9605388685a2", 00:11:22.002 "num_allocated_clusters": 38, 00:11:22.002 "snapshot": false, 00:11:22.002 "thin_provision": false 00:11:22.002 } 00:11:22.002 }, 00:11:22.002 "name": "435d90f6-965b-473f-9cbe-5de18a96c6f8", 00:11:22.002 "num_blocks": 38912, 00:11:22.002 "product_name": "Logical Volume", 00:11:22.002 "supported_io_types": { 00:11:22.002 "abort": false, 00:11:22.002 "compare": false, 00:11:22.002 "compare_and_write": false, 00:11:22.002 "copy": false, 00:11:22.002 "flush": false, 00:11:22.002 "get_zone_info": false, 00:11:22.002 "nvme_admin": false, 00:11:22.002 "nvme_io": false, 00:11:22.002 "nvme_io_md": false, 00:11:22.002 "nvme_iov_md": false, 00:11:22.002 "read": true, 00:11:22.002 "reset": true, 00:11:22.002 "seek_data": true, 00:11:22.002 "seek_hole": true, 00:11:22.002 "unmap": true, 00:11:22.002 "write": true, 00:11:22.002 "write_zeroes": true, 00:11:22.002 "zcopy": false, 00:11:22.002 "zone_append": false, 00:11:22.002 "zone_management": false 00:11:22.002 }, 00:11:22.002 "uuid": "435d90f6-965b-473f-9cbe-5de18a96c6f8", 00:11:22.002 "zoned": false 00:11:22.002 } 00:11:22.002 ] 00:11:22.002 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:11:22.002 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:22.002 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 769640ab-a369-421b-abc2-9605388685a2 00:11:22.262 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:22.262 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:22.262 13:05:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 769640ab-a369-421b-abc2-9605388685a2 00:11:22.520 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:22.520 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 435d90f6-965b-473f-9cbe-5de18a96c6f8 00:11:22.778 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 769640ab-a369-421b-abc2-9605388685a2 00:11:23.036 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:23.600 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:23.859 ************************************ 00:11:23.859 END TEST lvs_grow_clean 00:11:23.859 ************************************ 00:11:23.859 00:11:23.859 real 0m19.347s 00:11:23.859 user 0m18.674s 00:11:23.859 sys 0m2.256s 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:23.859 ************************************ 00:11:23.859 START TEST lvs_grow_dirty 00:11:23.859 ************************************ 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
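
The dirty variant now rebuilds the same AIO-backed lvstore the clean pass used; the lines that follow create it and assert data_clusters == 49. The arithmetic behind that check: a 200 MiB file at the 4194304-byte (4 MiB) cluster size yields 50 clusters, of which 49 are usable for data (the remainder is apparently held back for lvstore metadata — inferred from the 49-of-50 and, after growing, 99-of-100 counts in this log, not stated by it). A condensed sketch of the grow sequence, reordered for illustration and using only commands that appear in this run:

    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    truncate -s 200M "$aio"                      # 200 MiB backing file
    $rpc bdev_aio_create "$aio" aio_bdev 4096    # expose it as bdev "aio_bdev"
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # prints the lvstore UUID
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)        # 150 MiB volume

    # Growing is three steps: resize the file, rescan the AIO bdev so it picks
    # up the new block count, then grow the lvstore into the new space.
    truncate -s 400M "$aio"
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

In this dirty pass the bdev_lvol_grow_lvstore call is issued while bdevperf I/O is in flight, which is what makes the later kill -9 recovery check meaningful.
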
00:11:23.859 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:24.118 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:24.118 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:24.376 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:24.376 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:24.376 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:24.635 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:24.635 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:24.635 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d lvol 150 00:11:25.236 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=273d9882-f308-4084-80dc-e80b40ffac1f 00:11:25.236 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:25.236 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:25.494 [2024-12-06 13:05:13.883451] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:25.494 [2024-12-06 13:05:13.883559] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:25.494 true 00:11:25.494 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:25.494 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:25.752 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:25.752 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:26.010 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 273d9882-f308-4084-80dc-e80b40ffac1f 00:11:26.267 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:26.526 [2024-12-06 13:05:15.056070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:26.526 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:26.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:26.783 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66982 00:11:26.783 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:26.783 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:26.783 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66982 /var/tmp/bdevperf.sock 00:11:26.783 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66982 ']' 00:11:26.783 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:26.783 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.783 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:26.783 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.783 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:26.783 [2024-12-06 13:05:15.402255] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
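
The lvol has just been exported over NVMe/TCP the same way as in the clean pass; the subsystem, namespace, and listener calls above are the complete export. As a reference sketch (NQN, serial, lvol UUID, and address copied from this log; -a permits any host NQN to connect):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Subsystem with serial SPDK0, open to any host (-a).
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    # The lvol bdev (by UUID) becomes namespace 1 of the subsystem.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 273d9882-f308-4084-80dc-e80b40ffac1f
    # Data listener plus a discovery listener on the same TCP port.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
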
00:11:26.783 [2024-12-06 13:05:15.402332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66982 ] 00:11:27.041 [2024-12-06 13:05:15.553131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.041 [2024-12-06 13:05:15.623886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.975 13:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.975 13:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:27.975 13:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:28.233 Nvme0n1 00:11:28.233 13:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:28.491 [ 00:11:28.491 { 00:11:28.491 "aliases": [ 00:11:28.491 "273d9882-f308-4084-80dc-e80b40ffac1f" 00:11:28.491 ], 00:11:28.491 "assigned_rate_limits": { 00:11:28.491 "r_mbytes_per_sec": 0, 00:11:28.491 "rw_ios_per_sec": 0, 00:11:28.491 "rw_mbytes_per_sec": 0, 00:11:28.491 "w_mbytes_per_sec": 0 00:11:28.491 }, 00:11:28.491 "block_size": 4096, 00:11:28.491 "claimed": false, 00:11:28.491 "driver_specific": { 00:11:28.491 "mp_policy": "active_passive", 00:11:28.491 "nvme": [ 00:11:28.491 { 00:11:28.491 "ctrlr_data": { 00:11:28.491 "ana_reporting": false, 00:11:28.491 "cntlid": 1, 00:11:28.491 "firmware_revision": "25.01", 00:11:28.491 "model_number": "SPDK bdev Controller", 00:11:28.491 "multi_ctrlr": true, 00:11:28.491 "oacs": { 00:11:28.491 "firmware": 0, 00:11:28.491 "format": 0, 00:11:28.491 "ns_manage": 0, 00:11:28.491 "security": 0 00:11:28.491 }, 00:11:28.491 "serial_number": "SPDK0", 00:11:28.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:28.491 "vendor_id": "0x8086" 00:11:28.491 }, 00:11:28.491 "ns_data": { 00:11:28.491 "can_share": true, 00:11:28.491 "id": 1 00:11:28.491 }, 00:11:28.491 "trid": { 00:11:28.491 "adrfam": "IPv4", 00:11:28.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:28.491 "traddr": "10.0.0.3", 00:11:28.491 "trsvcid": "4420", 00:11:28.491 "trtype": "TCP" 00:11:28.491 }, 00:11:28.491 "vs": { 00:11:28.491 "nvme_version": "1.3" 00:11:28.491 } 00:11:28.491 } 00:11:28.491 ] 00:11:28.491 }, 00:11:28.491 "memory_domains": [ 00:11:28.491 { 00:11:28.491 "dma_device_id": "system", 00:11:28.491 "dma_device_type": 1 00:11:28.491 } 00:11:28.491 ], 00:11:28.491 "name": "Nvme0n1", 00:11:28.491 "num_blocks": 38912, 00:11:28.491 "numa_id": -1, 00:11:28.491 "product_name": "NVMe disk", 00:11:28.491 "supported_io_types": { 00:11:28.491 "abort": true, 00:11:28.491 "compare": true, 00:11:28.491 "compare_and_write": true, 00:11:28.491 "copy": true, 00:11:28.491 "flush": true, 00:11:28.491 "get_zone_info": false, 00:11:28.491 "nvme_admin": true, 00:11:28.491 "nvme_io": true, 00:11:28.491 "nvme_io_md": false, 00:11:28.491 "nvme_iov_md": false, 00:11:28.491 "read": true, 00:11:28.491 "reset": true, 00:11:28.491 "seek_data": false, 00:11:28.491 "seek_hole": false, 00:11:28.491 "unmap": true, 00:11:28.491 
"write": true, 00:11:28.491 "write_zeroes": true, 00:11:28.491 "zcopy": false, 00:11:28.491 "zone_append": false, 00:11:28.491 "zone_management": false 00:11:28.491 }, 00:11:28.491 "uuid": "273d9882-f308-4084-80dc-e80b40ffac1f", 00:11:28.491 "zoned": false 00:11:28.491 } 00:11:28.491 ] 00:11:28.491 13:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:28.491 13:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67035 00:11:28.491 13:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:28.756 Running I/O for 10 seconds... 00:11:29.691 Latency(us) 00:11:29.691 [2024-12-06T13:05:18.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:29.691 Nvme0n1 : 1.00 8192.00 32.00 0.00 0.00 0.00 0.00 0.00 00:11:29.691 [2024-12-06T13:05:18.331Z] =================================================================================================================== 00:11:29.691 [2024-12-06T13:05:18.331Z] Total : 8192.00 32.00 0.00 0.00 0.00 0.00 0.00 00:11:29.691 00:11:30.623 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:30.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.623 Nvme0n1 : 2.00 8136.50 31.78 0.00 0.00 0.00 0.00 0.00 00:11:30.623 [2024-12-06T13:05:19.263Z] =================================================================================================================== 00:11:30.623 [2024-12-06T13:05:19.263Z] Total : 8136.50 31.78 0.00 0.00 0.00 0.00 0.00 00:11:30.623 00:11:30.880 true 00:11:30.880 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:30.880 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:31.447 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:31.447 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:31.447 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67035 00:11:31.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.704 Nvme0n1 : 3.00 8112.00 31.69 0.00 0.00 0.00 0.00 0.00 00:11:31.704 [2024-12-06T13:05:20.344Z] =================================================================================================================== 00:11:31.704 [2024-12-06T13:05:20.344Z] Total : 8112.00 31.69 0.00 0.00 0.00 0.00 0.00 00:11:31.704 00:11:32.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.639 Nvme0n1 : 4.00 7966.50 31.12 0.00 0.00 0.00 0.00 0.00 00:11:32.639 [2024-12-06T13:05:21.279Z] =================================================================================================================== 00:11:32.639 [2024-12-06T13:05:21.279Z] Total : 7966.50 31.12 0.00 0.00 0.00 
0.00 0.00 00:11:32.639 00:11:33.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.574 Nvme0n1 : 5.00 7884.60 30.80 0.00 0.00 0.00 0.00 0.00 00:11:33.574 [2024-12-06T13:05:22.214Z] =================================================================================================================== 00:11:33.574 [2024-12-06T13:05:22.214Z] Total : 7884.60 30.80 0.00 0.00 0.00 0.00 0.00 00:11:33.574 00:11:34.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.949 Nvme0n1 : 6.00 7679.83 30.00 0.00 0.00 0.00 0.00 0.00 00:11:34.949 [2024-12-06T13:05:23.589Z] =================================================================================================================== 00:11:34.949 [2024-12-06T13:05:23.589Z] Total : 7679.83 30.00 0.00 0.00 0.00 0.00 0.00 00:11:34.949 00:11:35.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.883 Nvme0n1 : 7.00 7672.00 29.97 0.00 0.00 0.00 0.00 0.00 00:11:35.883 [2024-12-06T13:05:24.523Z] =================================================================================================================== 00:11:35.883 [2024-12-06T13:05:24.523Z] Total : 7672.00 29.97 0.00 0.00 0.00 0.00 0.00 00:11:35.883 00:11:36.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.816 Nvme0n1 : 8.00 7671.38 29.97 0.00 0.00 0.00 0.00 0.00 00:11:36.816 [2024-12-06T13:05:25.456Z] =================================================================================================================== 00:11:36.816 [2024-12-06T13:05:25.456Z] Total : 7671.38 29.97 0.00 0.00 0.00 0.00 0.00 00:11:36.816 00:11:37.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.750 Nvme0n1 : 9.00 7654.00 29.90 0.00 0.00 0.00 0.00 0.00 00:11:37.750 [2024-12-06T13:05:26.390Z] =================================================================================================================== 00:11:37.750 [2024-12-06T13:05:26.390Z] Total : 7654.00 29.90 0.00 0.00 0.00 0.00 0.00 00:11:37.750 00:11:38.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.707 Nvme0n1 : 10.00 7661.30 29.93 0.00 0.00 0.00 0.00 0.00 00:11:38.707 [2024-12-06T13:05:27.347Z] =================================================================================================================== 00:11:38.707 [2024-12-06T13:05:27.347Z] Total : 7661.30 29.93 0.00 0.00 0.00 0.00 0.00 00:11:38.707 00:11:38.707 00:11:38.707 Latency(us) 00:11:38.707 [2024-12-06T13:05:27.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.707 Nvme0n1 : 10.01 7666.58 29.95 0.00 0.00 16690.23 5689.72 179211.17 00:11:38.707 [2024-12-06T13:05:27.347Z] =================================================================================================================== 00:11:38.707 [2024-12-06T13:05:27.347Z] Total : 7666.58 29.95 0.00 0.00 16690.23 5689.72 179211.17 00:11:38.707 { 00:11:38.707 "results": [ 00:11:38.707 { 00:11:38.707 "job": "Nvme0n1", 00:11:38.707 "core_mask": "0x2", 00:11:38.707 "workload": "randwrite", 00:11:38.707 "status": "finished", 00:11:38.707 "queue_depth": 128, 00:11:38.707 "io_size": 4096, 00:11:38.707 "runtime": 10.009812, 00:11:38.707 "iops": 7666.577554103913, 00:11:38.707 "mibps": 29.94756857071841, 00:11:38.707 "io_failed": 0, 00:11:38.707 "io_timeout": 0, 00:11:38.707 "avg_latency_us": 
16690.22992142401, 00:11:38.707 "min_latency_us": 5689.716363636364, 00:11:38.707 "max_latency_us": 179211.1709090909 00:11:38.707 } 00:11:38.707 ], 00:11:38.707 "core_count": 1 00:11:38.707 } 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66982 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 66982 ']' 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 66982 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66982 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:38.707 killing process with pid 66982 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66982' 00:11:38.707 Received shutdown signal, test time was about 10.000000 seconds 00:11:38.707 00:11:38.707 Latency(us) 00:11:38.707 [2024-12-06T13:05:27.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.707 [2024-12-06T13:05:27.347Z] =================================================================================================================== 00:11:38.707 [2024-12-06T13:05:27.347Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 66982 00:11:38.707 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 66982 00:11:38.966 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:39.224 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:39.483 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:39.483 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66367 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66367 00:11:39.741 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66367 Killed "${NVMF_APP[@]}" "$@" 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=67198 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 67198 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67198 ']' 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.741 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:40.000 [2024-12-06 13:05:28.447356] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:11:40.000 [2024-12-06 13:05:28.447527] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.000 [2024-12-06 13:05:28.613355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.258 [2024-12-06 13:05:28.673329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.258 [2024-12-06 13:05:28.673390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.258 [2024-12-06 13:05:28.673403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.258 [2024-12-06 13:05:28.673411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.258 [2024-12-06 13:05:28.673418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
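
This is the core of the dirty case: the nvmf target that owned the lvstore was killed with kill -9 above (pid 66367) before the blobstore could shut down cleanly, and a fresh target (pid 67198) has just come up. Re-creating the AIO bdev below triggers blobstore recovery ("Performing recovery on blobstore"), after which the grown geometry must still be visible. A sketch of that verification, with the expected numbers taken from this run (99 total data clusters minus the 38 clusters the 150 MiB lvol holds — see "num_allocated_clusters": 38 below — leaves 61 free; waitforbdev is a harness helper built on the two RPCs shown):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    # Re-create the AIO bdev; the examine pass replays the dirty lvstore metadata.
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    $rpc bdev_wait_for_examine

    # Both the grown capacity and the allocation map must have survived.
    uuid=78cbf56c-9025-48d1-bb5c-d48f663e6d2d
    $rpc bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].free_clusters'        # 61
    $rpc bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters'  # 99
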
00:11:40.258 [2024-12-06 13:05:28.673826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.825 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.825 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:40.826 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.826 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.826 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:40.826 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.826 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:41.085 [2024-12-06 13:05:29.668029] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:41.085 [2024-12-06 13:05:29.668338] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:41.085 [2024-12-06 13:05:29.668564] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:41.085 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:41.085 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 273d9882-f308-4084-80dc-e80b40ffac1f 00:11:41.085 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=273d9882-f308-4084-80dc-e80b40ffac1f 00:11:41.085 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.085 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:41.085 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.085 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.085 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:41.652 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 273d9882-f308-4084-80dc-e80b40ffac1f -t 2000 00:11:41.652 [ 00:11:41.652 { 00:11:41.652 "aliases": [ 00:11:41.652 "lvs/lvol" 00:11:41.652 ], 00:11:41.652 "assigned_rate_limits": { 00:11:41.652 "r_mbytes_per_sec": 0, 00:11:41.652 "rw_ios_per_sec": 0, 00:11:41.652 "rw_mbytes_per_sec": 0, 00:11:41.652 "w_mbytes_per_sec": 0 00:11:41.652 }, 00:11:41.652 "block_size": 4096, 00:11:41.652 "claimed": false, 00:11:41.652 "driver_specific": { 00:11:41.652 "lvol": { 00:11:41.652 "base_bdev": "aio_bdev", 00:11:41.652 "clone": false, 00:11:41.652 "esnap_clone": false, 00:11:41.652 "lvol_store_uuid": "78cbf56c-9025-48d1-bb5c-d48f663e6d2d", 00:11:41.652 "num_allocated_clusters": 38, 00:11:41.652 "snapshot": false, 00:11:41.652 
"thin_provision": false 00:11:41.652 } 00:11:41.652 }, 00:11:41.652 "name": "273d9882-f308-4084-80dc-e80b40ffac1f", 00:11:41.652 "num_blocks": 38912, 00:11:41.652 "product_name": "Logical Volume", 00:11:41.652 "supported_io_types": { 00:11:41.652 "abort": false, 00:11:41.652 "compare": false, 00:11:41.652 "compare_and_write": false, 00:11:41.652 "copy": false, 00:11:41.652 "flush": false, 00:11:41.652 "get_zone_info": false, 00:11:41.652 "nvme_admin": false, 00:11:41.652 "nvme_io": false, 00:11:41.652 "nvme_io_md": false, 00:11:41.652 "nvme_iov_md": false, 00:11:41.652 "read": true, 00:11:41.652 "reset": true, 00:11:41.652 "seek_data": true, 00:11:41.652 "seek_hole": true, 00:11:41.652 "unmap": true, 00:11:41.652 "write": true, 00:11:41.652 "write_zeroes": true, 00:11:41.652 "zcopy": false, 00:11:41.652 "zone_append": false, 00:11:41.652 "zone_management": false 00:11:41.652 }, 00:11:41.652 "uuid": "273d9882-f308-4084-80dc-e80b40ffac1f", 00:11:41.652 "zoned": false 00:11:41.652 } 00:11:41.652 ] 00:11:41.652 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:41.653 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:41.653 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:41.912 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:41.912 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:41.912 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:42.170 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:42.170 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:42.429 [2024-12-06 13:05:31.029391] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.687 13:05:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:42.687 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:42.687 2024/12/06 13:05:31 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:78cbf56c-9025-48d1-bb5c-d48f663e6d2d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:42.687 request: 00:11:42.687 { 00:11:42.687 "method": "bdev_lvol_get_lvstores", 00:11:42.687 "params": { 00:11:42.687 "uuid": "78cbf56c-9025-48d1-bb5c-d48f663e6d2d" 00:11:42.687 } 00:11:42.687 } 00:11:42.687 Got JSON-RPC error response 00:11:42.687 GoRPCClient: error on JSON-RPC call 00:11:42.944 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:11:42.944 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:42.944 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:42.944 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:42.944 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:43.202 aio_bdev 00:11:43.202 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 273d9882-f308-4084-80dc-e80b40ffac1f 00:11:43.202 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=273d9882-f308-4084-80dc-e80b40ffac1f 00:11:43.202 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.202 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:43.202 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.202 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.202 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:43.458 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 273d9882-f308-4084-80dc-e80b40ffac1f -t 2000 00:11:43.716 [ 
00:11:43.716 { 00:11:43.716 "aliases": [ 00:11:43.716 "lvs/lvol" 00:11:43.716 ], 00:11:43.716 "assigned_rate_limits": { 00:11:43.716 "r_mbytes_per_sec": 0, 00:11:43.716 "rw_ios_per_sec": 0, 00:11:43.716 "rw_mbytes_per_sec": 0, 00:11:43.716 "w_mbytes_per_sec": 0 00:11:43.716 }, 00:11:43.716 "block_size": 4096, 00:11:43.716 "claimed": false, 00:11:43.716 "driver_specific": { 00:11:43.716 "lvol": { 00:11:43.716 "base_bdev": "aio_bdev", 00:11:43.716 "clone": false, 00:11:43.716 "esnap_clone": false, 00:11:43.716 "lvol_store_uuid": "78cbf56c-9025-48d1-bb5c-d48f663e6d2d", 00:11:43.716 "num_allocated_clusters": 38, 00:11:43.716 "snapshot": false, 00:11:43.716 "thin_provision": false 00:11:43.716 } 00:11:43.716 }, 00:11:43.716 "name": "273d9882-f308-4084-80dc-e80b40ffac1f", 00:11:43.716 "num_blocks": 38912, 00:11:43.716 "product_name": "Logical Volume", 00:11:43.716 "supported_io_types": { 00:11:43.716 "abort": false, 00:11:43.716 "compare": false, 00:11:43.716 "compare_and_write": false, 00:11:43.716 "copy": false, 00:11:43.716 "flush": false, 00:11:43.716 "get_zone_info": false, 00:11:43.716 "nvme_admin": false, 00:11:43.716 "nvme_io": false, 00:11:43.716 "nvme_io_md": false, 00:11:43.716 "nvme_iov_md": false, 00:11:43.716 "read": true, 00:11:43.716 "reset": true, 00:11:43.716 "seek_data": true, 00:11:43.716 "seek_hole": true, 00:11:43.716 "unmap": true, 00:11:43.716 "write": true, 00:11:43.716 "write_zeroes": true, 00:11:43.716 "zcopy": false, 00:11:43.716 "zone_append": false, 00:11:43.716 "zone_management": false 00:11:43.716 }, 00:11:43.716 "uuid": "273d9882-f308-4084-80dc-e80b40ffac1f", 00:11:43.716 "zoned": false 00:11:43.716 } 00:11:43.716 ] 00:11:43.716 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:43.716 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:43.716 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:43.975 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:43.975 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:43.975 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:44.238 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:44.238 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 273d9882-f308-4084-80dc-e80b40ffac1f 00:11:44.541 13:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 78cbf56c-9025-48d1-bb5c-d48f663e6d2d 00:11:44.799 13:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:45.365 13:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:45.622 ************************************ 00:11:45.622 END TEST lvs_grow_dirty 00:11:45.622 ************************************ 00:11:45.622 00:11:45.622 real 0m21.783s 00:11:45.622 user 0m45.119s 00:11:45.622 sys 0m8.009s 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:45.623 nvmf_trace.0 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.623 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.187 rmmod nvme_tcp 00:11:46.187 rmmod nvme_fabrics 00:11:46.187 rmmod nvme_keyring 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 67198 ']' 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 67198 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 67198 ']' 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 67198 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:11:46.187 13:05:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67198 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.187 killing process with pid 67198 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67198' 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 67198 00:11:46.187 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 67198 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:46.445 13:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:46.445 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:46.445 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:46.445 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.445 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:11:46.703 00:11:46.703 real 0m43.496s 00:11:46.703 user 1m10.777s 00:11:46.703 sys 0m11.240s 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:46.703 ************************************ 00:11:46.703 END TEST nvmf_lvs_grow 00:11:46.703 ************************************ 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:46.703 ************************************ 00:11:46.703 START TEST nvmf_bdev_io_wait 00:11:46.703 ************************************ 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:46.703 * Looking for test storage... 
00:11:46.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.703 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:46.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.962 --rc genhtml_branch_coverage=1 00:11:46.962 --rc genhtml_function_coverage=1 00:11:46.962 --rc genhtml_legend=1 00:11:46.962 --rc geninfo_all_blocks=1 00:11:46.962 --rc geninfo_unexecuted_blocks=1 00:11:46.962 00:11:46.962 ' 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:46.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.962 --rc genhtml_branch_coverage=1 00:11:46.962 --rc genhtml_function_coverage=1 00:11:46.962 --rc genhtml_legend=1 00:11:46.962 --rc geninfo_all_blocks=1 00:11:46.962 --rc geninfo_unexecuted_blocks=1 00:11:46.962 00:11:46.962 ' 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:46.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.962 --rc genhtml_branch_coverage=1 00:11:46.962 --rc genhtml_function_coverage=1 00:11:46.962 --rc genhtml_legend=1 00:11:46.962 --rc geninfo_all_blocks=1 00:11:46.962 --rc geninfo_unexecuted_blocks=1 00:11:46.962 00:11:46.962 ' 00:11:46.962 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:46.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.963 --rc genhtml_branch_coverage=1 00:11:46.963 --rc genhtml_function_coverage=1 00:11:46.963 --rc genhtml_legend=1 00:11:46.963 --rc geninfo_all_blocks=1 00:11:46.963 --rc geninfo_unexecuted_blocks=1 00:11:46.963 00:11:46.963 ' 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.963 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
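Note: nvmftestinit below tears down and rebuilds the virtual test network. Condensed into a minimal sketch using only ip/iptables commands that appear verbatim in the trace that follows (interface names and the 10.0.0.0/24 addresses are the ones recorded in this run; the second initiator/target pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4, is set up the same way):

    # one veth pair per endpoint: one end stays on the host (or moves into the target netns),
    # the peer end joins a common bridge so initiator and target can reach each other
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # admit NVMe/TCP traffic on port 4420 arriving from the initiator interface
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT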
00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:46.963 
13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:46.963 Cannot find device "nvmf_init_br" 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:46.963 Cannot find device "nvmf_init_br2" 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:46.963 Cannot find device "nvmf_tgt_br" 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.963 Cannot find device "nvmf_tgt_br2" 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:46.963 Cannot find device "nvmf_init_br" 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:11:46.963 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:46.963 Cannot find device "nvmf_init_br2" 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:46.964 Cannot find device "nvmf_tgt_br" 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:46.964 Cannot find device "nvmf_tgt_br2" 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:46.964 Cannot find device "nvmf_br" 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:46.964 Cannot find device "nvmf_init_if" 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:46.964 Cannot find device "nvmf_init_if2" 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.964 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:11:46.964 
13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.964 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:46.964 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:47.221 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:47.221 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:47.222 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.222 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:11:47.222 00:11:47.222 --- 10.0.0.3 ping statistics --- 00:11:47.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.222 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:47.222 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:47.222 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:11:47.222 00:11:47.222 --- 10.0.0.4 ping statistics --- 00:11:47.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.222 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:47.222 00:11:47.222 --- 10.0.0.1 ping statistics --- 00:11:47.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.222 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:47.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:47.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:11:47.222 00:11:47.222 --- 10.0.0.2 ping statistics --- 00:11:47.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.222 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67674 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67674 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67674 ']' 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.222 13:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:47.479 [2024-12-06 13:05:35.889217] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
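Note: nvmfappstart above launches the target inside the test namespace and blocks until its RPC socket answers. A condensed sketch of that sequence (the backgrounding and pid capture are inferred from nvmfpid=67674 in the trace, so treat them as assumptions):

    # start nvmf_tgt with all trace groups enabled (-e 0xFFFF), a 4-core mask, and init gated on RPC
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!                 # 67674 in this run
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the app responds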
00:11:47.479 [2024-12-06 13:05:35.889320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.479 [2024-12-06 13:05:36.046977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.479 [2024-12-06 13:05:36.114006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.479 [2024-12-06 13:05:36.114289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.479 [2024-12-06 13:05:36.114531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.479 [2024-12-06 13:05:36.114791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.479 [2024-12-06 13:05:36.114921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.479 [2024-12-06 13:05:36.116264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.479 [2024-12-06 13:05:36.116518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.479 [2024-12-06 13:05:36.116335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.479 [2024-12-06 13:05:36.116387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.412 13:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.671 [2024-12-06 13:05:37.069114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.671 Malloc0 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.671 [2024-12-06 13:05:37.128233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67727 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:48.671 { 00:11:48.671 "params": { 00:11:48.671 "name": "Nvme$subsystem", 00:11:48.671 "trtype": "$TEST_TRANSPORT", 00:11:48.671 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:11:48.671 "adrfam": "ipv4", 00:11:48.671 "trsvcid": "$NVMF_PORT", 00:11:48.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.671 "hdgst": ${hdgst:-false}, 00:11:48.671 "ddgst": ${ddgst:-false} 00:11:48.671 }, 00:11:48.671 "method": "bdev_nvme_attach_controller" 00:11:48.671 } 00:11:48.671 EOF 00:11:48.671 )") 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:48.671 { 00:11:48.671 "params": { 00:11:48.671 "name": "Nvme$subsystem", 00:11:48.671 "trtype": "$TEST_TRANSPORT", 00:11:48.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.671 "adrfam": "ipv4", 00:11:48.671 "trsvcid": "$NVMF_PORT", 00:11:48.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.671 "hdgst": ${hdgst:-false}, 00:11:48.671 "ddgst": ${ddgst:-false} 00:11:48.671 }, 00:11:48.671 "method": "bdev_nvme_attach_controller" 00:11:48.671 } 00:11:48.671 EOF 00:11:48.671 )") 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67729 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67733 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:48.671 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:48.671 { 00:11:48.671 "params": { 00:11:48.671 "name": "Nvme$subsystem", 00:11:48.671 "trtype": "$TEST_TRANSPORT", 00:11:48.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.672 "adrfam": "ipv4", 00:11:48.672 "trsvcid": "$NVMF_PORT", 00:11:48.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.672 "hdgst": ${hdgst:-false}, 00:11:48.672 "ddgst": ${ddgst:-false} 00:11:48.672 }, 00:11:48.672 "method": "bdev_nvme_attach_controller" 00:11:48.672 } 00:11:48.672 EOF 00:11:48.672 )") 
00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67737 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:48.672 { 00:11:48.672 "params": { 00:11:48.672 "name": "Nvme$subsystem", 00:11:48.672 "trtype": "$TEST_TRANSPORT", 00:11:48.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.672 "adrfam": "ipv4", 00:11:48.672 "trsvcid": "$NVMF_PORT", 00:11:48.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.672 "hdgst": ${hdgst:-false}, 00:11:48.672 "ddgst": ${ddgst:-false} 00:11:48.672 }, 00:11:48.672 "method": "bdev_nvme_attach_controller" 00:11:48.672 } 00:11:48.672 EOF 00:11:48.672 )") 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:48.672 "params": { 00:11:48.672 "name": "Nvme1", 00:11:48.672 "trtype": "tcp", 00:11:48.672 "traddr": "10.0.0.3", 00:11:48.672 "adrfam": "ipv4", 00:11:48.672 "trsvcid": "4420", 00:11:48.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.672 "hdgst": false, 00:11:48.672 "ddgst": false 00:11:48.672 }, 00:11:48.672 "method": "bdev_nvme_attach_controller" 00:11:48.672 }' 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
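Note: gen_nvmf_target_json assembles the attach config that each bdevperf instance reads from /dev/fd/63; after variable expansion it is the JSON below, pretty-printed through jq (the string is exactly what the printf steps in this trace emit; piping it into jq is condensed from the separate jq . step, so the plumbing is approximate):

    # one NVMe-oF controller over TCP to the target at 10.0.0.3:4420, header/data digests disabled
    printf '%s\n' '{
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }' | jq .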
00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:48.672 "params": { 00:11:48.672 "name": "Nvme1", 00:11:48.672 "trtype": "tcp", 00:11:48.672 "traddr": "10.0.0.3", 00:11:48.672 "adrfam": "ipv4", 00:11:48.672 "trsvcid": "4420", 00:11:48.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.672 "hdgst": false, 00:11:48.672 "ddgst": false 00:11:48.672 }, 00:11:48.672 "method": "bdev_nvme_attach_controller" 00:11:48.672 }' 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:48.672 "params": { 00:11:48.672 "name": "Nvme1", 00:11:48.672 "trtype": "tcp", 00:11:48.672 "traddr": "10.0.0.3", 00:11:48.672 "adrfam": "ipv4", 00:11:48.672 "trsvcid": "4420", 00:11:48.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.672 "hdgst": false, 00:11:48.672 "ddgst": false 00:11:48.672 }, 00:11:48.672 "method": "bdev_nvme_attach_controller" 00:11:48.672 }' 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:48.672 "params": { 00:11:48.672 "name": "Nvme1", 00:11:48.672 "trtype": "tcp", 00:11:48.672 "traddr": "10.0.0.3", 00:11:48.672 "adrfam": "ipv4", 00:11:48.672 "trsvcid": "4420", 00:11:48.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.672 "hdgst": false, 00:11:48.672 "ddgst": false 00:11:48.672 }, 00:11:48.672 "method": "bdev_nvme_attach_controller" 00:11:48.672 }' 00:11:48.672 [2024-12-06 13:05:37.196889] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:11:48.672 [2024-12-06 13:05:37.196993] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:48.672 [2024-12-06 13:05:37.205740] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:11:48.672 [2024-12-06 13:05:37.205966] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:48.672 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67727 00:11:48.672 [2024-12-06 13:05:37.229793] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:11:48.672 [2024-12-06 13:05:37.230305] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:11:48.672 [2024-12-06 13:05:37.231912] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization...
00:11:48.672 [2024-12-06 13:05:37.231994] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:11:48.931 [2024-12-06 13:05:37.429519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:48.931 [2024-12-06 13:05:37.494259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:11:48.931 [2024-12-06 13:05:37.504921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:48.931 [2024-12-06 13:05:37.565114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:11:49.189 [2024-12-06 13:05:37.579267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:49.189 [2024-12-06 13:05:37.636954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:11:49.189 Running I/O for 1 seconds...
00:11:49.189 [2024-12-06 13:05:37.662159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:49.189 Running I/O for 1 seconds...
00:11:49.189 [2024-12-06 13:05:37.720608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:11:49.189 Running I/O for 1 seconds...
00:11:49.446 Running I/O for 1 seconds...
00:11:50.378 10107.00 IOPS, 39.48 MiB/s
00:11:50.378 Latency(us)
00:11:50.378 [2024-12-06T13:05:39.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:50.378 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:11:50.378 Nvme1n1 : 1.01 10164.23 39.70 0.00 0.00 12540.52 6285.50 19541.64
00:11:50.378 [2024-12-06T13:05:39.018Z] ===================================================================================================================
00:11:50.378 [2024-12-06T13:05:39.018Z] Total : 10164.23 39.70 0.00 0.00 12540.52 6285.50 19541.64
00:11:50.378 8737.00 IOPS, 34.13 MiB/s
00:11:50.378 Latency(us)
00:11:50.378 [2024-12-06T13:05:39.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:50.378 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:11:50.378 Nvme1n1 : 1.01 8816.33 34.44 0.00 0.00 14456.58 6196.13 23235.49
00:11:50.378 [2024-12-06T13:05:39.018Z] ===================================================================================================================
00:11:50.378 [2024-12-06T13:05:39.018Z] Total : 8816.33 34.44 0.00 0.00 14456.58 6196.13 23235.49
00:11:50.378 7410.00 IOPS, 28.95 MiB/s
00:11:50.378 Latency(us)
00:11:50.378 [2024-12-06T13:05:39.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:50.378 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:11:50.378 Nvme1n1 : 1.01 7468.53 29.17 0.00 0.00 17049.81 8102.63 26452.71
00:11:50.378 [2024-12-06T13:05:39.018Z] ===================================================================================================================
00:11:50.378 [2024-12-06T13:05:39.018Z] Total : 7468.53 29.17 0.00 0.00 17049.81 8102.63 26452.71
00:11:50.378 13:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67729
00:11:50.378 188704.00 IOPS, 737.12 MiB/s
00:11:50.378 Latency(us)
00:11:50.378 [2024-12-06T13:05:39.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:50.378 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:11:50.378 Nvme1n1 : 1.00 188343.98 735.72 0.00 0.00 675.96 294.17 1891.61
00:11:50.378 [2024-12-06T13:05:39.018Z] ===================================================================================================================
00:11:50.378 [2024-12-06T13:05:39.018Z] Total : 188343.98 735.72 0.00 0.00 675.96 294.17 1891.61
00:11:50.378 13:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67733
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67737
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:50.637 rmmod nvme_tcp
00:11:50.637 rmmod nvme_fabrics
00:11:50.637 rmmod nvme_keyring
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67674 ']'
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67674
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67674 ']'
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67674
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67674
00:11:50.637
13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.637 killing process with pid 67674 00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67674' 00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67674 00:11:50.637 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67674 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:50.912 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:11:51.171 00:11:51.171 real 0m4.431s 00:11:51.171 user 0m17.607s 00:11:51.171 sys 0m2.221s 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:51.171 ************************************ 00:11:51.171 END TEST nvmf_bdev_io_wait 00:11:51.171 ************************************ 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:51.171 ************************************ 00:11:51.171 START TEST nvmf_queue_depth 00:11:51.171 ************************************ 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:51.171 * Looking for test storage... 00:11:51.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:11:51.171 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@344 -- # case "$op" in 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:51.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.431 --rc genhtml_branch_coverage=1 00:11:51.431 --rc genhtml_function_coverage=1 00:11:51.431 --rc genhtml_legend=1 00:11:51.431 --rc geninfo_all_blocks=1 00:11:51.431 --rc geninfo_unexecuted_blocks=1 00:11:51.431 00:11:51.431 ' 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:51.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.431 --rc genhtml_branch_coverage=1 00:11:51.431 --rc genhtml_function_coverage=1 00:11:51.431 --rc genhtml_legend=1 00:11:51.431 --rc geninfo_all_blocks=1 00:11:51.431 --rc geninfo_unexecuted_blocks=1 00:11:51.431 00:11:51.431 ' 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:51.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.431 --rc genhtml_branch_coverage=1 00:11:51.431 --rc genhtml_function_coverage=1 00:11:51.431 --rc genhtml_legend=1 00:11:51.431 --rc geninfo_all_blocks=1 00:11:51.431 --rc geninfo_unexecuted_blocks=1 00:11:51.431 00:11:51.431 ' 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:51.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.431 --rc genhtml_branch_coverage=1 00:11:51.431 --rc genhtml_function_coverage=1 00:11:51.431 --rc 
genhtml_legend=1 00:11:51.431 --rc geninfo_all_blocks=1 00:11:51.431 --rc geninfo_unexecuted_blocks=1 00:11:51.431 00:11:51.431 ' 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:51.431 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.432 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:51.432 13:05:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:51.432 Cannot find device "nvmf_init_br" 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:51.432 Cannot find device "nvmf_init_br2" 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:51.432 Cannot find device "nvmf_tgt_br" 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:51.432 Cannot find device "nvmf_tgt_br2" 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:51.432 Cannot find device "nvmf_init_br" 00:11:51.432 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:51.433 Cannot find device "nvmf_init_br2" 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:51.433 Cannot find device "nvmf_tgt_br" 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:51.433 Cannot find device "nvmf_tgt_br2" 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:51.433 Cannot find device "nvmf_br" 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 
00:11:51.433 Cannot find device "nvmf_init_if" 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:51.433 Cannot find device "nvmf_init_if2" 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:11:51.433 13:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:51.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:51.433 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:11:51.433 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:51.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:51.433 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:11:51.433 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:51.433 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:51.433 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:51.433 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:51.433 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:51.433 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:51.433 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
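The 'Cannot find device' lines above are expected noise: cleanup runs before setup and each teardown command is followed by a 'true' guard in the trace, so a fresh machine simply has nothing to delete. The nvmf_veth_init sequence that follows then builds the test network: veth pairs whose target ends (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3/.4) move into the nvmf_tgt_ns_spdk namespace while the initiator ends (nvmf_init_if/nvmf_init_if2, 10.0.0.1/.2) stay in the root namespace, with all of the *_br peers enslaved to the nvmf_br bridge so both sides share one L2 segment. Condensed from the trace above, showing one pair per side; the if2/br2 pair and the remaining link-up steps repeat the same pattern:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br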
00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:51.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:51.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:11:51.692 00:11:51.692 --- 10.0.0.3 ping statistics --- 00:11:51.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.692 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:51.692 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:51.692 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:11:51.692 00:11:51.692 --- 10.0.0.4 ping statistics --- 00:11:51.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.692 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:51.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:11:51.692 00:11:51.692 --- 10.0.0.1 ping statistics --- 00:11:51.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.692 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:51.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:11:51.692 00:11:51.692 --- 10.0.0.2 ping statistics --- 00:11:51.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.692 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.692 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=68022 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 68022 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68022 ']' 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
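The four pings above prove the bridged topology end to end (root namespace 10.0.0.1/.2 to namespaced 10.0.0.3/.4 and back), and the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") line is the step that makes everything after it namespace-aware: the target binary is re-wrapped so it always launches via 'ip netns exec nvmf_tgt_ns_spdk', which is what lets nvmf_tgt (pid 68022, core mask 0x2) bind 10.0.0.3. waitforlisten then blocks until the app's RPC socket accepts connections. A sketch of the equivalent launch-and-wait, where polling spdk_get_version through rpc.py is an assumed stand-in for waitforlisten's real socket check:

# Sketch, not the verbatim nvmfappstart/waitforlisten code.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version &> /dev/null; do
    sleep 0.5    # target's RPC socket (/var/tmp/spdk.sock by default) not ready yet
done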
00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.693 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:51.952 [2024-12-06 13:05:40.355519] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:11:51.952 [2024-12-06 13:05:40.355646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.952 [2024-12-06 13:05:40.513576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.952 [2024-12-06 13:05:40.582395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.952 [2024-12-06 13:05:40.582679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.952 [2024-12-06 13:05:40.582779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.952 [2024-12-06 13:05:40.582893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.952 [2024-12-06 13:05:40.583000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.952 [2024-12-06 13:05:40.583587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 [2024-12-06 13:05:40.767003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 Malloc0 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 [2024-12-06 13:05:40.821258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68064 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68064 /var/tmp/bdevperf.sock 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68064 ']' 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.211 13:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.469 [2024-12-06 13:05:40.889246] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
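The rpc_cmd traces above provision the target end to end, and bdevperf is then started in daemon mode (-z) on its own RPC socket so a controller can be attached and the run triggered separately (the @34/@35 traces that follow do exactly that). Condensed into direct rpc.py calls, an assumed equivalent of the test's rpc_cmd wrapper, with every flag taken from the traces:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport
$rpc bdev_malloc_create 64 512 -b Malloc0                                      # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Host side: -z makes bdevperf idle until perform_tests arrives on its socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

In the 10-second verify run that follows, MiB/s is again IOPS x 4096 / 2^20: the final 8506.36 IOPS works out to 33.23 MiB/s, matching the reported mibps.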
00:11:52.469 [2024-12-06 13:05:40.889348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68064 ]
00:11:52.727 [2024-12-06 13:05:41.043296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:52.727 [2024-12-06 13:05:41.115208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:53.293 13:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:53.293 13:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:11:53.293 13:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:11:53.293 13:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:53.293 13:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:11:53.550 NVMe0n1
00:11:53.550 13:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:53.550 13:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:11:53.550 Running I/O for 10 seconds...
00:11:55.577 7722.00 IOPS, 30.16 MiB/s
[2024-12-06T13:05:45.149Z] 8194.50 IOPS, 32.01 MiB/s
[2024-12-06T13:05:46.178Z] 8243.67 IOPS, 32.20 MiB/s
[2024-12-06T13:05:47.550Z] 8277.00 IOPS, 32.33 MiB/s
[2024-12-06T13:05:48.487Z] 8375.80 IOPS, 32.72 MiB/s
[2024-12-06T13:05:49.422Z] 8369.50 IOPS, 32.69 MiB/s
[2024-12-06T13:05:50.355Z] 8352.57 IOPS, 32.63 MiB/s
[2024-12-06T13:05:51.288Z] 8427.75 IOPS, 32.92 MiB/s
[2024-12-06T13:05:52.234Z] 8439.22 IOPS, 32.97 MiB/s
[2024-12-06T13:05:52.500Z] 8486.70 IOPS, 33.15 MiB/s
00:12:03.860 Latency(us)
00:12:03.860 [2024-12-06T13:05:52.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:03.860 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:12:03.860 Verification LBA range: start 0x0 length 0x4000
00:12:03.860 NVMe0n1 : 10.09 8506.36 33.23 0.00 0.00 119794.10 30027.40 108670.60
00:12:03.860 [2024-12-06T13:05:52.500Z] ===================================================================================================================
00:12:03.860 [2024-12-06T13:05:52.500Z] Total : 8506.36 33.23 0.00 0.00 119794.10 30027.40 108670.60
00:12:03.860 {
00:12:03.860 "results": [
00:12:03.860 {
00:12:03.860 "job": "NVMe0n1",
00:12:03.860 "core_mask": "0x1",
00:12:03.860 "workload": "verify",
00:12:03.860 "status": "finished",
00:12:03.860 "verify_range": {
00:12:03.860 "start": 0,
00:12:03.860 "length": 16384
00:12:03.860 },
00:12:03.860 "queue_depth": 1024,
00:12:03.860 "io_size": 4096,
00:12:03.860 "runtime": 10.090923,
00:12:03.860 "iops": 8506.357644389913,
00:12:03.860 "mibps": 33.2279595483981,
00:12:03.860 "io_failed": 0,
00:12:03.860 "io_timeout": 0,
00:12:03.860 "avg_latency_us": 119794.09803960359,
00:12:03.860 "min_latency_us": 30027.403636363637,
00:12:03.860 "max_latency_us": 108670.60363636364
00:12:03.860 }
00:12:03.860 ],
00:12:03.860 "core_count": 1
00:12:03.860 }
00:12:03.860 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth
-- target/queue_depth.sh@39 -- # killprocess 68064 00:12:03.860 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68064 ']' 00:12:03.860 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68064 00:12:03.860 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:03.860 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.860 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68064 00:12:03.860 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.860 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.860 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68064' 00:12:03.860 killing process with pid 68064 00:12:03.860 Received shutdown signal, test time was about 10.000000 seconds 00:12:03.860 00:12:03.861 Latency(us) 00:12:03.861 [2024-12-06T13:05:52.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.861 [2024-12-06T13:05:52.501Z] =================================================================================================================== 00:12:03.861 [2024-12-06T13:05:52.501Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:03.861 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68064 00:12:03.861 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68064 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.119 rmmod nvme_tcp 00:12:04.119 rmmod nvme_fabrics 00:12:04.119 rmmod nvme_keyring 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 68022 ']' 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 68022 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68022 ']' 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68022 00:12:04.119 13:05:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68022 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68022' 00:12:04.119 killing process with pid 68022 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68022 00:12:04.119 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68022 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:04.378 13:05:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:04.378 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:04.636 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:04.636 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:04.636 13:05:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.636 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:12:04.637 00:12:04.637 real 0m13.471s 00:12:04.637 user 0m23.060s 00:12:04.637 sys 0m2.245s 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:04.637 ************************************ 00:12:04.637 END TEST nvmf_queue_depth 00:12:04.637 ************************************ 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:04.637 ************************************ 00:12:04.637 START TEST nvmf_target_multipath 00:12:04.637 ************************************ 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:04.637 * Looking for test storage... 
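Each suite in this run is driven through the harness's run_test wrapper, which prints the asterisk START/END banners and the per-suite timing (the "real 0m13.471s" block above). A minimal sketch of that pattern, an illustrative reconstruction rather than the actual implementation in common/autotest_common.sh:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # the suite script plus its arguments, e.g. multipath.sh --transport=tcp as invoked above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }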
00:12:04.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:12:04.637 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.896 --rc genhtml_branch_coverage=1 00:12:04.896 --rc genhtml_function_coverage=1 00:12:04.896 --rc genhtml_legend=1 00:12:04.896 --rc geninfo_all_blocks=1 00:12:04.896 --rc geninfo_unexecuted_blocks=1 00:12:04.896 00:12:04.896 ' 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.896 --rc genhtml_branch_coverage=1 00:12:04.896 --rc genhtml_function_coverage=1 00:12:04.896 --rc genhtml_legend=1 00:12:04.896 --rc geninfo_all_blocks=1 00:12:04.896 --rc geninfo_unexecuted_blocks=1 00:12:04.896 00:12:04.896 ' 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.896 --rc genhtml_branch_coverage=1 00:12:04.896 --rc genhtml_function_coverage=1 00:12:04.896 --rc genhtml_legend=1 00:12:04.896 --rc geninfo_all_blocks=1 00:12:04.896 --rc geninfo_unexecuted_blocks=1 00:12:04.896 00:12:04.896 ' 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.896 --rc genhtml_branch_coverage=1 00:12:04.896 --rc genhtml_function_coverage=1 00:12:04.896 --rc genhtml_legend=1 00:12:04.896 --rc geninfo_all_blocks=1 00:12:04.896 --rc geninfo_unexecuted_blocks=1 00:12:04.896 00:12:04.896 ' 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.896 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.897 
13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.897 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:04.897 13:05:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:04.897 Cannot find device "nvmf_init_br" 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:04.897 Cannot find device "nvmf_init_br2" 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:04.897 Cannot find device "nvmf_tgt_br" 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:04.897 Cannot find device "nvmf_tgt_br2" 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:04.897 Cannot find device "nvmf_init_br" 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:04.897 Cannot find device "nvmf_init_br2" 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:04.897 Cannot find device "nvmf_tgt_br" 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:04.897 Cannot find device "nvmf_tgt_br2" 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:12:04.897 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:04.898 Cannot find device "nvmf_br" 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:04.898 Cannot find device "nvmf_init_if" 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:04.898 Cannot find device "nvmf_init_if2" 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:04.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:04.898 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
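Condensed to one of the two paths, the fixture built here looks like this (a sketch distilled from the commands above; the real nvmf_veth_init in test/nvmf/common.sh sets up both initiator pairs and both target pairs the same way):

    ip netns add nvmf_tgt_ns_spdk                                  # isolated namespace for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address (host side)
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # first target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The *_br peer ends are then enslaved to the nvmf_br bridge so the initiator-side and target-side pairs share one L2 segment, iptables ACCEPT rules admit port 4420, and the pings that follow verify 10.0.0.3/10.0.0.4 from the host and 10.0.0.1/10.0.0.2 from inside the namespace.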
00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:05.156 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:05.156 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:12:05.156 00:12:05.156 --- 10.0.0.3 ping statistics --- 00:12:05.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.156 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:05.156 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:05.156 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:12:05.156 00:12:05.156 --- 10.0.0.4 ping statistics --- 00:12:05.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.156 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:05.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:05.156 00:12:05.156 --- 10.0.0.1 ping statistics --- 00:12:05.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.156 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:05.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:05.156 00:12:05.156 --- 10.0.0.2 ping statistics --- 00:12:05.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.156 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=68458 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 68458 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 68458 ']' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
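The target application itself runs inside that namespace: nvmfappstart prefixes nvmf_tgt with the namespace wrapper, records the pid (68458 here), and waitforlisten polls until the RPC socket answers. Reduced to its essentials, a sketch using the same names the log uses:

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # blocks until the app listens on /var/tmp/spdk.sock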
00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.156 13:05:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:05.414 [2024-12-06 13:05:53.840745] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:12:05.414 [2024-12-06 13:05:53.840847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.414 [2024-12-06 13:05:53.996114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.672 [2024-12-06 13:05:54.068244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.672 [2024-12-06 13:05:54.068310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.672 [2024-12-06 13:05:54.068326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.672 [2024-12-06 13:05:54.068337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.672 [2024-12-06 13:05:54.068346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.672 [2024-12-06 13:05:54.073509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.672 [2024-12-06 13:05:54.073675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.672 [2024-12-06 13:05:54.073784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.672 [2024-12-06 13:05:54.073793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.672 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.672 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:12:05.672 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.672 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:05.672 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:05.672 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.672 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:05.930 [2024-12-06 13:05:54.558481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.187 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:06.445 Malloc0 00:12:06.445 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
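With the transport, a Malloc0 bdev, and an ANA-reporting subsystem in place, the remaining provisioning steps (the namespace and the two listeners, executed next) complete the picture. Stripped of the xtrace wrappers, the whole RPC sequence amounts to:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the harness's options
    $rpc_py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
            -a -s SPDKISFASTANDAWESOME -r                           # any host, fixed serial, ANA reporting on
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420   # second path

Two listeners on one subsystem are what give the initiator two controllers (nvme0c0n1 and nvme0c1n1) over the same namespace; the test then flips ANA states per listener while fio runs to exercise path failover.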
00:12:06.702 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.959 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:07.216 [2024-12-06 13:05:55.700703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:07.216 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:12:07.474 [2024-12-06 13:05:55.964907] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:12:07.474 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:07.771 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:12:08.028 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.028 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:12:08.028 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.028 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:08.028 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:09.923 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:09.924 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:09.924 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:09.924 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:09.924 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:12:09.924 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68587 00:12:09.924 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:09.924 13:05:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:12:09.924 [global] 00:12:09.924 thread=1 00:12:09.924 invalidate=1 00:12:09.924 rw=randrw 00:12:09.924 time_based=1 00:12:09.924 runtime=6 00:12:09.924 ioengine=libaio 00:12:09.924 direct=1 00:12:09.924 bs=4096 00:12:09.924 iodepth=128 00:12:09.924 norandommap=0 00:12:09.924 numjobs=1 00:12:09.924 00:12:09.924 verify_dump=1 00:12:09.924 verify_backlog=512 00:12:09.924 verify_state_save=0 00:12:09.924 do_verify=1 00:12:09.924 verify=crc32c-intel 00:12:09.924 [job0] 00:12:09.924 filename=/dev/nvme0n1 00:12:09.924 Could not set queue depth (nvme0n1) 00:12:10.181 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.181 fio-3.35 00:12:10.181 Starting 1 thread 00:12:11.115 13:05:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:11.372 13:05:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:11.630 13:06:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:12.564 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:12.564 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:12.564 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:12.564 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:12.823 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:13.081 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:14.030 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:14.030 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:14.030 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:14.030 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68587 00:12:16.561 00:12:16.561 job0: (groupid=0, jobs=1): err= 0: pid=68613: Fri Dec 6 13:06:04 2024 00:12:16.561 read: IOPS=11.1k, BW=43.3MiB/s (45.4MB/s)(260MiB/6004msec) 00:12:16.561 slat (usec): min=3, max=5072, avg=52.03, stdev=235.19 00:12:16.561 clat (usec): min=366, max=14873, avg=7888.20, stdev=1182.71 00:12:16.561 lat (usec): min=379, max=14883, avg=7940.23, stdev=1193.21 00:12:16.562 clat percentiles (usec): 00:12:16.562 | 1.00th=[ 4686], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7177], 00:12:16.562 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8029], 00:12:16.562 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[ 9765], 00:12:16.562 | 99.00th=[11600], 99.50th=[12125], 99.90th=[12780], 99.95th=[13042], 00:12:16.562 | 99.99th=[14222] 00:12:16.562 bw ( KiB/s): min=10712, max=29808, per=52.17%, avg=23123.64, stdev=6627.31, samples=11 00:12:16.562 iops : min= 2678, max= 7452, avg=5780.91, stdev=1656.83, samples=11 00:12:16.562 write: IOPS=6638, BW=25.9MiB/s (27.2MB/s)(136MiB/5246msec); 0 zone resets 00:12:16.562 slat (usec): min=7, max=2363, avg=62.90, stdev=160.92 00:12:16.562 clat (usec): min=580, max=14042, avg=6802.70, stdev=981.59 00:12:16.562 lat (usec): min=659, max=14065, avg=6865.60, stdev=984.98 00:12:16.562 clat percentiles (usec): 00:12:16.562 | 1.00th=[ 3654], 5.00th=[ 4948], 10.00th=[ 5800], 20.00th=[ 6259], 00:12:16.562 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7046], 00:12:16.562 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7963], 00:12:16.562 | 99.00th=[ 9634], 99.50th=[10290], 99.90th=[11994], 99.95th=[12256], 00:12:16.562 | 99.99th=[12911] 00:12:16.562 bw ( KiB/s): min=10968, max=29504, per=87.27%, avg=23175.27, stdev=6302.09, samples=11 00:12:16.562 iops : min= 2742, max= 7376, avg=5793.82, stdev=1575.52, samples=11 00:12:16.562 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:16.562 lat (msec) : 2=0.04%, 4=0.78%, 10=96.11%, 20=3.05% 00:12:16.562 cpu : usr=5.98%, sys=22.49%, ctx=6608, majf=0, minf=114 00:12:16.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:16.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:16.562 issued rwts: total=66531,34828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:16.562 00:12:16.562 Run status group 0 (all jobs): 00:12:16.562 READ: bw=43.3MiB/s (45.4MB/s), 43.3MiB/s-43.3MiB/s (45.4MB/s-45.4MB/s), io=260MiB (273MB), run=6004-6004msec 00:12:16.562 WRITE: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=136MiB (143MB), run=5246-5246msec 00:12:16.562 00:12:16.562 Disk stats (read/write): 00:12:16.562 nvme0n1: ios=65585/34158, merge=0/0, ticks=484492/217339, in_queue=701831, util=98.61% 00:12:16.562 13:06:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:12:16.562 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:12:16.820 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:17.822 13:06:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:17.822 13:06:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:17.822 13:06:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:17.822 13:06:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:12:17.822 13:06:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68746 00:12:17.822 13:06:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:17.822 13:06:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:12:17.822 [global] 00:12:17.822 thread=1 00:12:17.822 invalidate=1 00:12:17.822 rw=randrw 00:12:17.822 time_based=1 00:12:17.822 runtime=6 00:12:17.822 ioengine=libaio 00:12:17.822 direct=1 00:12:17.822 bs=4096 00:12:17.822 iodepth=128 00:12:17.822 norandommap=0 00:12:17.822 numjobs=1 00:12:17.822 00:12:17.822 verify_dump=1 00:12:17.822 verify_backlog=512 00:12:17.822 verify_state_save=0 00:12:17.822 do_verify=1 00:12:17.822 verify=crc32c-intel 00:12:17.822 [job0] 00:12:17.822 filename=/dev/nvme0n1 00:12:17.822 Could not set queue depth (nvme0n1) 00:12:18.081 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:18.081 fio-3.35 00:12:18.081 Starting 1 thread 00:12:19.016 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:19.275 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:12:19.532 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:19.533 13:06:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:20.466 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:20.466 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:20.466 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:20.466 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:20.723 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:12:21.287 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:12:21.287 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:21.287 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:21.287 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:21.287 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:21.287 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:21.287 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:12:21.288 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:21.288 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:21.288 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:21.288 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:21.288 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:21.288 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:22.218 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:22.218 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:22.218 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:22.218 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68746 00:12:24.116 00:12:24.116 job0: (groupid=0, jobs=1): err= 0: pid=68767: Fri Dec 6 13:06:12 2024 00:12:24.116 read: IOPS=12.0k, BW=46.9MiB/s (49.2MB/s)(282MiB/6004msec) 00:12:24.116 slat (usec): min=2, max=7835, avg=42.34, stdev=223.12 00:12:24.116 clat (usec): min=297, max=21240, avg=7430.69, stdev=2104.92 00:12:24.116 lat (usec): min=308, max=21273, avg=7473.04, stdev=2125.50 00:12:24.116 clat percentiles (usec): 00:12:24.116 | 1.00th=[ 1926], 5.00th=[ 3818], 10.00th=[ 4555], 20.00th=[ 5800], 00:12:24.116 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7832], 00:12:24.116 | 70.00th=[ 8356], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[10683], 00:12:24.116 | 99.00th=[12780], 99.50th=[14746], 99.90th=[16909], 99.95th=[18220], 00:12:24.116 | 99.99th=[20055] 00:12:24.116 bw ( KiB/s): min=12184, max=40032, per=53.71%, avg=25795.64, stdev=8771.54, samples=11 00:12:24.116 iops : min= 3046, max=10008, avg=6448.91, stdev=2192.88, samples=11 00:12:24.116 write: IOPS=7181, BW=28.1MiB/s (29.4MB/s)(147MiB/5243msec); 0 zone resets 00:12:24.116 slat (usec): min=4, max=5619, avg=50.72, stdev=139.62 00:12:24.116 clat (usec): min=157, max=16864, avg=6003.69, stdev=1816.68 00:12:24.116 lat (usec): min=218, max=16893, avg=6054.41, stdev=1831.87 00:12:24.116 clat percentiles (usec): 00:12:24.116 | 1.00th=[ 2057], 5.00th=[ 2900], 10.00th=[ 3392], 20.00th=[ 4228], 00:12:24.116 | 30.00th=[ 5014], 40.00th=[ 5997], 50.00th=[ 6456], 60.00th=[ 6783], 00:12:24.116 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7767], 95.00th=[ 8455], 00:12:24.116 | 99.00th=[10290], 99.50th=[11338], 99.90th=[13960], 99.95th=[14222], 00:12:24.116 | 99.99th=[15008] 00:12:24.116 bw ( KiB/s): min=12752, max=39424, per=89.65%, avg=25754.18, stdev=8576.57, samples=11 00:12:24.116 iops : min= 3188, max= 9856, avg=6438.55, stdev=2144.14, samples=11 00:12:24.116 lat (usec) : 250=0.01%, 500=0.04%, 750=0.08%, 1000=0.14% 00:12:24.116 lat (msec) : 2=0.75%, 4=8.77%, 10=84.79%, 20=5.41%, 50=0.02% 00:12:24.116 cpu : usr=6.28%, sys=22.84%, ctx=7263, majf=0, minf=102 00:12:24.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:24.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:24.116 issued rwts: total=72090,37654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:24.116 00:12:24.116 Run status group 0 (all jobs): 00:12:24.116 READ: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=282MiB (295MB), run=6004-6004msec 00:12:24.116 WRITE: bw=28.1MiB/s (29.4MB/s), 28.1MiB/s-28.1MiB/s (29.4MB/s-29.4MB/s), io=147MiB (154MB), run=5243-5243msec 00:12:24.116 00:12:24.116 Disk stats (read/write): 00:12:24.116 nvme0n1: ios=70524/37654, merge=0/0, ticks=492041/210352, in_queue=702393, util=98.56% 00:12:24.116 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:24.116 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.116 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:12:24.116 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:24.116 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.374 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.374 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:24.374 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:12:24.374 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.633 rmmod nvme_tcp 00:12:24.633 rmmod nvme_fabrics 00:12:24.633 rmmod nvme_keyring 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 68458 ']' 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 68458 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 68458 ']' 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 68458 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68458 00:12:24.633 13:06:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.633 killing process with pid 68458 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68458' 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 68458 00:12:24.633 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 68458 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:24.891 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # 
remove_spdk_ns 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:12:25.148 00:12:25.148 real 0m20.496s 00:12:25.148 user 1m19.759s 00:12:25.148 sys 0m6.438s 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:25.148 ************************************ 00:12:25.148 END TEST nvmf_target_multipath 00:12:25.148 ************************************ 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:25.148 ************************************ 00:12:25.148 START TEST nvmf_zcopy 00:12:25.148 ************************************ 00:12:25.148 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:25.409 * Looking for test storage... 
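For reference before the zcopy output continues: the wait loop the multipath test above leaned on is check_ana_state, whose shape can be read straight off the xtrace markers (target/multipath.sh@18-26). The sketch below mirrors only what the log shows; the exact source and its behaviour when the countdown expires are assumptions:

# check_ana_state <controller-namespace> <expected-state>, reconstructed
# from the trace above. Polls the sysfs ana_state file once per second
# until it reports the state set via nvmf_subsystem_listener_set_ana_state.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state

    # Script line 25 carries both [[ ]] tests and the sleep; line 26 holds
    # the countdown, matching the @25/@26 markers in the trace.
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]] && sleep 1s; do
        (( timeout-- == 0 )) && return 1 # assumed: give up after ~20 tries
    done
}

Invoked above as, e.g., check_ana_state nvme0c0n1 optimized and check_ana_state nvme0c1n1 inaccessible, one call per path after every ANA transition the test drives.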
00:12:25.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:25.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.409 --rc genhtml_branch_coverage=1 00:12:25.409 --rc genhtml_function_coverage=1 00:12:25.409 --rc genhtml_legend=1 00:12:25.409 --rc geninfo_all_blocks=1 00:12:25.409 --rc geninfo_unexecuted_blocks=1 00:12:25.409 00:12:25.409 ' 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:25.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.409 --rc genhtml_branch_coverage=1 00:12:25.409 --rc genhtml_function_coverage=1 00:12:25.409 --rc genhtml_legend=1 00:12:25.409 --rc geninfo_all_blocks=1 00:12:25.409 --rc geninfo_unexecuted_blocks=1 00:12:25.409 00:12:25.409 ' 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:25.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.409 --rc genhtml_branch_coverage=1 00:12:25.409 --rc genhtml_function_coverage=1 00:12:25.409 --rc genhtml_legend=1 00:12:25.409 --rc geninfo_all_blocks=1 00:12:25.409 --rc geninfo_unexecuted_blocks=1 00:12:25.409 00:12:25.409 ' 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:25.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.409 --rc genhtml_branch_coverage=1 00:12:25.409 --rc genhtml_function_coverage=1 00:12:25.409 --rc genhtml_legend=1 00:12:25.409 --rc geninfo_all_blocks=1 00:12:25.409 --rc geninfo_unexecuted_blocks=1 00:12:25.409 00:12:25.409 ' 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
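The probe just traced is scripts/common.sh's dotted-version comparator, used here to decide whether the installed lcov (1.15) predates version 2. Reconstructed roughly from the xtrace (lt resolving to cmp_versions with op '<'); the real helper supports more operators, and the zero fallback in decimal is an assumption:

# lt A B  <=>  cmp_versions A '<' B, per scripts/common.sh@373.
decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0 # fallback assumed
}

cmp_versions() {
    local IFS=.-: # split version strings on dots/dashes/colons (@336/@337)
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    local lt=0 gt=0 eq=0 v

    # Walk the longer of the two component lists (@364), comparing field
    # by field until one side wins.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        (( ver1[v] > ver2[v] )) && { gt=1; break; }
        (( ver1[v] < ver2[v] )) && { lt=1; break; }
    done

    case $op in
        '<') (( lt == 1 )) ;;
        '>') (( gt == 1 )) ;;
    esac
}

For the trace above: 1.15 splits into (1 15) and 2 into (2); the first fields compare 1 < 2, so lt 1.15 2 succeeds and the branch-coverage LCOV_OPTS that follow are exported.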
00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.409 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.410 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
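nvmftestinit now builds the virtual network that every 10.0.0.x address in this test refers to. As a roadmap for the long trace that follows, here is the sequence condensed from the very ip/iptables commands it prints (nvmf/common.sh@177-219); the real nvmf_veth_init wraps these in helpers and the "Cannot find device" pre-checks visible below:

# Condensed reconstruction of nvmf_veth_init as traced below. Two
# initiator interfaces stay in the root namespace, two target interfaces
# move into nvmf_tgt_ns_spdk, and the peer ends are bridged together.
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: <endpoint> <-> <bridge-side peer>
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target endpoints live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing as used by the tests: initiators .1/.2, targets .3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, inside and outside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the four peer ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port on the initiator interfaces and allow
# intra-bridge forwarding; the SPDK_NVMF comment tags let teardown
# strip exactly these rules via iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The resulting topology: nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) act as initiator ports in the root namespace, nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) serve the target inside nvmf_tgt_ns_spdk, and the four pings below verify each leg of the bridge.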
00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:25.410 Cannot find device "nvmf_init_br" 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:12:25.410 13:06:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:25.410 Cannot find device "nvmf_init_br2" 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:25.410 Cannot find device "nvmf_tgt_br" 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.410 Cannot find device "nvmf_tgt_br2" 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:12:25.410 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:25.410 Cannot find device "nvmf_init_br" 00:12:25.410 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:12:25.410 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:25.410 Cannot find device "nvmf_init_br2" 00:12:25.410 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:12:25.410 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:25.410 Cannot find device "nvmf_tgt_br" 00:12:25.410 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:12:25.410 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:25.410 Cannot find device "nvmf_tgt_br2" 00:12:25.410 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:12:25.410 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:25.410 Cannot find device "nvmf_br" 00:12:25.669 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:12:25.669 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:25.669 Cannot find device "nvmf_init_if" 00:12:25.669 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:12:25.669 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:25.669 Cannot find device "nvmf_init_if2" 00:12:25.669 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:12:25.669 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.669 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:12:25.669 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:25.670 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:25.928 13:06:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:25.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:25.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:25.928 00:12:25.928 --- 10.0.0.3 ping statistics --- 00:12:25.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.928 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:25.928 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:25.928 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:12:25.928 00:12:25.928 --- 10.0.0.4 ping statistics --- 00:12:25.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.928 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:25.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:25.928 00:12:25.928 --- 10.0.0.1 ping statistics --- 00:12:25.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.928 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:25.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:25.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:25.928 00:12:25.928 --- 10.0.0.2 ping statistics --- 00:12:25.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.928 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:25.928 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=69102 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 69102 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 69102 ']' 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.929 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.929 [2024-12-06 13:06:14.420863] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:12:25.929 [2024-12-06 13:06:14.420944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.188 [2024-12-06 13:06:14.570754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.188 [2024-12-06 13:06:14.638195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.188 [2024-12-06 13:06:14.638246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.188 [2024-12-06 13:06:14.638261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.188 [2024-12-06 13:06:14.638271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.188 [2024-12-06 13:06:14.638281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.188 [2024-12-06 13:06:14.638747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:27.122 [2024-12-06 13:06:15.498205] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:27.122 [2024-12-06 13:06:15.518336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:27.122 malloc0 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.122 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:27.123 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:27.123 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:27.123 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:27.123 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:27.123 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:27.123 { 00:12:27.123 "params": { 00:12:27.123 "name": "Nvme$subsystem", 00:12:27.123 "trtype": "$TEST_TRANSPORT", 00:12:27.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:27.123 "adrfam": "ipv4", 00:12:27.123 "trsvcid": "$NVMF_PORT", 00:12:27.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:27.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:27.123 "hdgst": ${hdgst:-false}, 00:12:27.123 "ddgst": ${ddgst:-false} 00:12:27.123 }, 00:12:27.123 "method": "bdev_nvme_attach_controller" 00:12:27.123 } 00:12:27.123 EOF 00:12:27.123 )") 00:12:27.123 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:27.123 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
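With the target running inside the namespace, the subsystem configuration traced above reduces to a handful of rpc.py calls (rpc_cmd in the log is the harness front end for the same script). A condensed replay, using exactly the names and arguments from the trace:

# Target-side setup for the zcopy test, as issued above via rpc_cmd.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# '-t tcp -o' comes from NVMF_TRANSPORT_OPTS in the trace; -c 0 sets the
# in-capsule data size to zero and --zcopy enables the transport's
# zero-copy path, which is what this test exercises.
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Allow any host (-a), serial SPDK00000000000001, at most 10 namespaces.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1.
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON that gen_nvmf_target_json prints just below is the matching initiator-side piece: bdevperf reads it from /dev/fd/62 and attaches controller Nvme1 to this same listener at tcp/10.0.0.3:4420.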
00:12:27.123 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:27.123 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:27.123 "params": { 00:12:27.123 "name": "Nvme1", 00:12:27.123 "trtype": "tcp", 00:12:27.123 "traddr": "10.0.0.3", 00:12:27.123 "adrfam": "ipv4", 00:12:27.123 "trsvcid": "4420", 00:12:27.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:27.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:27.123 "hdgst": false, 00:12:27.123 "ddgst": false 00:12:27.123 }, 00:12:27.123 "method": "bdev_nvme_attach_controller" 00:12:27.123 }' 00:12:27.123 [2024-12-06 13:06:15.622249] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:12:27.123 [2024-12-06 13:06:15.622340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69153 ] 00:12:27.380 [2024-12-06 13:06:15.775102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.380 [2024-12-06 13:06:15.838374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.637 Running I/O for 10 seconds... 00:12:29.518 5934.00 IOPS, 46.36 MiB/s [2024-12-06T13:06:19.093Z] 5809.50 IOPS, 45.39 MiB/s [2024-12-06T13:06:20.054Z] 5794.33 IOPS, 45.27 MiB/s [2024-12-06T13:06:21.448Z] 5877.50 IOPS, 45.92 MiB/s [2024-12-06T13:06:22.381Z] 5935.00 IOPS, 46.37 MiB/s [2024-12-06T13:06:23.314Z] 5952.83 IOPS, 46.51 MiB/s [2024-12-06T13:06:24.271Z] 5961.29 IOPS, 46.57 MiB/s [2024-12-06T13:06:25.200Z] 5966.62 IOPS, 46.61 MiB/s [2024-12-06T13:06:26.130Z] 5977.11 IOPS, 46.70 MiB/s [2024-12-06T13:06:26.130Z] 5989.40 IOPS, 46.79 MiB/s 00:12:37.490 Latency(us) 00:12:37.490 [2024-12-06T13:06:26.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.490 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:37.490 Verification LBA range: start 0x0 length 0x1000 00:12:37.490 Nvme1n1 : 10.01 5993.55 46.82 0.00 0.00 21286.74 2815.07 33363.78 00:12:37.490 [2024-12-06T13:06:26.130Z] =================================================================================================================== 00:12:37.490 [2024-12-06T13:06:26.130Z] Total : 5993.55 46.82 0.00 0.00 21286.74 2815.07 33363.78 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69270 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:37.747 { 00:12:37.747 "params": { 00:12:37.747 "name": "Nvme$subsystem", 
00:12:37.747 "trtype": "$TEST_TRANSPORT", 00:12:37.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.747 "adrfam": "ipv4", 00:12:37.747 "trsvcid": "$NVMF_PORT", 00:12:37.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.747 "hdgst": ${hdgst:-false}, 00:12:37.747 "ddgst": ${ddgst:-false} 00:12:37.747 }, 00:12:37.747 "method": "bdev_nvme_attach_controller" 00:12:37.747 } 00:12:37.747 EOF 00:12:37.747 )") 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:37.747 [2024-12-06 13:06:26.247217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.747 [2024-12-06 13:06:26.247261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:37.747 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:37.747 "params": { 00:12:37.747 "name": "Nvme1", 00:12:37.747 "trtype": "tcp", 00:12:37.747 "traddr": "10.0.0.3", 00:12:37.747 "adrfam": "ipv4", 00:12:37.747 "trsvcid": "4420", 00:12:37.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.747 "hdgst": false, 00:12:37.747 "ddgst": false 00:12:37.747 }, 00:12:37.747 "method": "bdev_nvme_attach_controller" 00:12:37.747 }' 00:12:37.747 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.747 [2024-12-06 13:06:26.259178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.747 [2024-12-06 13:06:26.259205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.747 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.747 [2024-12-06 13:06:26.271178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.747 [2024-12-06 13:06:26.271203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.747 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.747 [2024-12-06 13:06:26.283212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.747 [2024-12-06 13:06:26.283245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.747 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
00:12:37.747 [2024-12-06 13:06:26.289009] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization...
00:12:37.747 [2024-12-06 13:06:26.289080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69270 ]
00:12:37.747 [2024-12-06 13:06:26.295201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.747 [2024-12-06 13:06:26.295231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.747 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:37.747 [2024-12-06 13:06:26.307191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.747 [2024-12-06 13:06:26.307215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.747 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:37.747 [2024-12-06 13:06:26.319192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.747 [2024-12-06 13:06:26.319216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.747 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:37.747 [2024-12-06 13:06:26.331214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.747 [2024-12-06 13:06:26.331243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.747 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:37.747 [2024-12-06 13:06:26.343204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.747 [2024-12-06 13:06:26.343234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.747 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:37.747 [2024-12-06 13:06:26.355201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.747 [2024-12-06 13:06:26.355226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:12:37.747 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.748 [2024-12-06 13:06:26.367210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.748 [2024-12-06 13:06:26.367241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.748 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.748 [2024-12-06 13:06:26.379210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.748 [2024-12-06 13:06:26.379236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.748 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.008 [2024-12-06 13:06:26.391226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.008 [2024-12-06 13:06:26.391260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.008 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.008 [2024-12-06 13:06:26.403215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.008 [2024-12-06 13:06:26.403240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.008 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.008 [2024-12-06 13:06:26.415214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.008 [2024-12-06 13:06:26.415238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.008 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.008 [2024-12-06 13:06:26.427236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.008 [2024-12-06 13:06:26.427266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.008 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.008 [2024-12-06 13:06:26.431821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.008 [2024-12-06 13:06:26.439234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.008 [2024-12-06 13:06:26.439265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.008 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.008 [2024-12-06 13:06:26.451234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.008 [2024-12-06 13:06:26.451265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.008 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.008 [2024-12-06 13:06:26.463245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.008 [2024-12-06 13:06:26.463277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.475247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.475275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.487235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.487261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.499237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.499263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.503794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.009 [2024-12-06 13:06:26.511233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.511256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.523279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.523313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.535282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.535317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.547309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.547352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.559295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.559334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.571301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.571342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.583308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.583350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.595303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.595340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.607297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.607334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.619300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.619338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.631309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.631352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.009 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.009 [2024-12-06 13:06:26.643343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.009 [2024-12-06 13:06:26.643387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.267 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.267 [2024-12-06 13:06:26.655331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.267 [2024-12-06 13:06:26.655377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.267 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.267 [2024-12-06 13:06:26.667383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.267 [2024-12-06 13:06:26.667434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.267 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.267 [2024-12-06 13:06:26.679332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.267 [2024-12-06 13:06:26.679372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.267 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.267 [2024-12-06 13:06:26.691343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.267 [2024-12-06 13:06:26.691389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.267 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.267 Running I/O for 5 seconds... 
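The "Running I/O for 5 seconds..." marker means the second bdevperf instance is now driving the random read/write workload while the script keeps issuing nvmf_subsystem_add_ns calls that are expected to fail, exercising RPC handling while the namespace is paused under active I/O. A minimal sketch of that run-and-hammer pattern, implied by perfpid=69270 in the trace (gen_target_json is the stand-in sketched earlier; the rpc.py loop and sleep interval are illustrative, not the exact target/zcopy.sh logic):

# Run bdevperf in the background and record its PID, as the trace does.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" \
    --json <(gen_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

# While I/O is in flight, repeatedly try to add a namespace at an NSID that
# is already taken; every attempt should come back -32602 without
# disturbing the I/O path.
while kill -0 "$perfpid" 2>/dev/null; do
    if "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns \
            nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 2>/dev/null; then
        echo "unexpected success: NSID 1 should be in use" >&2
        exit 1
    fi
    sleep 0.1
done

wait "$perfpid"   # surface bdevperf's exit status as the test result

The interleaved counters are easy to sanity-check against the 8192-byte I/O size: 5993.55 IOPS x 8192 B / 2^20 = 46.82 MiB/s for the 10-second run above, and 11660.00 IOPS x 8192 B / 2^20 = 91.09 MiB/s for the sample that appears later in this run.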
00:12:38.267 [2024-12-06 13:06:26.703340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.267 [2024-12-06 13:06:26.703380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.267 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.267 [2024-12-06 13:06:26.720973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.267 [2024-12-06 13:06:26.721024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.267 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.267 [2024-12-06 13:06:26.738807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.268 [2024-12-06 13:06:26.738863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.268 [2024-12-06 13:06:26.754173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.268 [2024-12-06 13:06:26.754222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.268 [2024-12-06 13:06:26.764927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.268 [2024-12-06 13:06:26.764970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.268 [2024-12-06 13:06:26.779381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.268 [2024-12-06 13:06:26.779435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.268 [2024-12-06 13:06:26.791585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:12:38.268 [2024-12-06 13:06:26.791630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.268 [2024-12-06 13:06:26.808952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.268 [2024-12-06 13:06:26.808993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.268 [2024-12-06 13:06:26.824306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.268 [2024-12-06 13:06:26.824366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.268 [2024-12-06 13:06:26.841088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.268 [2024-12-06 13:06:26.841125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.268 [2024-12-06 13:06:26.857591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.268 [2024-12-06 13:06:26.857625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.268 [2024-12-06 13:06:26.874637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.268 [2024-12-06 13:06:26.874671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.268 [2024-12-06 13:06:26.889646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.268 [2024-12-06 13:06:26.889678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:38.268 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:26.906530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:26.906563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:26.921499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:26.921530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:26.936531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:26.936563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:26.953182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:26.953214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:26.969569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:26.969600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:26.986250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:26.986282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.001296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.001328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.018110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.018160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.033205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.033254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.049144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.049238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.066289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.066336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.082045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.082106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.092104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.092152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.106526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.106574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.124462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.124533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.140403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.140474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.527 [2024-12-06 13:06:27.157843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.527 [2024-12-06 13:06:27.157898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.527 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.173600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.173644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:38.785 [2024-12-06 13:06:27.184329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.184360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.199011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.199043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.215976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.216009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.231986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.232018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.248967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.248998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.264660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.264690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.277259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.277303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.294077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.294119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.309290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.309338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.322257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.322297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.332449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.332490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.347008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.347042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.358941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.358974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.376681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.376715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.387221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.387253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.399545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.399591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.785 [2024-12-06 13:06:27.409929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.785 [2024-12-06 13:06:27.409970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.785 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.044 [2024-12-06 13:06:27.424595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.044 [2024-12-06 13:06:27.424627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.044 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.044 [2024-12-06 13:06:27.440505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.044 [2024-12-06 13:06:27.440535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.044 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.044 [2024-12-06 13:06:27.457997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.044 [2024-12-06 13:06:27.458032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.044 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.044 [2024-12-06 13:06:27.473435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.044 [2024-12-06 13:06:27.473509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.044 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.044 [2024-12-06 13:06:27.485454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.044 [2024-12-06 13:06:27.485513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.044 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.044 [2024-12-06 13:06:27.500954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.044 [2024-12-06 13:06:27.500986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.044 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.044 [2024-12-06 13:06:27.517735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.044 [2024-12-06 13:06:27.517770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.044 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.044 [2024-12-06 13:06:27.532961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.044 [2024-12-06 13:06:27.532994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.044 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:39.044 [2024-12-06 13:06:27.550287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:39.044 [2024-12-06 13:06:27.550322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:39.044 2024/12/06 13:06:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the three-line entry above repeats with only its timestamps changing, 2024-12-06 13:06:27.565 through 13:06:27.693 ...]
00:12:39.303 11660.00 IOPS, 91.09 MiB/s [2024-12-06T13:06:27.943Z]
[... the same entry repeats, 13:06:27.708 through 13:06:28.694 ...]
00:12:40.089 11628.50 IOPS, 90.85 MiB/s [2024-12-06T13:06:28.729Z]
[... the same entry repeats, 13:06:28.708 through 13:06:29.504; the final occurrence, at 13:06:29.521, follows and is cut off mid-line ...]
00:12:41.130 [2024-12-06 13:06:29.521025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:41.130 [2024-12-06 13:06:29.521064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:41.130 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.130 [2024-12-06 13:06:29.537632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.130 [2024-12-06 13:06:29.537672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.130 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.130 [2024-12-06 13:06:29.554217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.130 [2024-12-06 13:06:29.554257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.130 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.130 [2024-12-06 13:06:29.570797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.130 [2024-12-06 13:06:29.570844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.130 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.130 [2024-12-06 13:06:29.587706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.130 [2024-12-06 13:06:29.587932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.130 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.130 [2024-12-06 13:06:29.603126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.130 [2024-12-06 13:06:29.603313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.130 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.130 [2024-12-06 13:06:29.620113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.130 [2024-12-06 13:06:29.620158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.130 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:41.130 [2024-12-06 13:06:29.635632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.635685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.131 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.131 [2024-12-06 13:06:29.646159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.646332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.131 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.131 [2024-12-06 13:06:29.661217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.661365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.131 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.131 [2024-12-06 13:06:29.671654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.671689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.131 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.131 [2024-12-06 13:06:29.686554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.686592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.131 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.131 [2024-12-06 13:06:29.696905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.697064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.131 11728.00 IOPS, 91.62 MiB/s [2024-12-06T13:06:29.771Z] 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.131 [2024-12-06 13:06:29.711789] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.711942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.131 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.131 [2024-12-06 13:06:29.721689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.721724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.131 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.131 [2024-12-06 13:06:29.736249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.736283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.131 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.131 [2024-12-06 13:06:29.752962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.752998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.131 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.131 [2024-12-06 13:06:29.768072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.131 [2024-12-06 13:06:29.768109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.783195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.783231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.798201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 
13:06:29.798235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.813267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.813321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.829794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.829832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.847125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.847205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.862550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.862584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.874595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.874631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.892161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.892213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.906854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.906901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.922894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.922932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.933090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.933126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.947957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.947994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.958153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.958188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.971841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.971877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:29.988868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:29.988906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:30.003618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:30.003668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.390 [2024-12-06 13:06:30.018963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.390 [2024-12-06 13:06:30.019008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.390 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.030774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.030812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.048296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.048338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.063197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.063236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.078376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.078414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.095352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.095398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.110751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.110795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.127870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.127927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.143056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.143098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.153524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.153564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.168026] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.168073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.178541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.178579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.192858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.192898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.205677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.205717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.222519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.222585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.237770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.237807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.253288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 
13:06:30.253342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.649 [2024-12-06 13:06:30.270689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.649 [2024-12-06 13:06:30.270724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.649 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.287007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.287046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.302109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.302148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.317727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.317766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.335152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.335194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.351770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.351807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.368266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.368302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.383519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.383565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.400423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.400471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.416472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.416505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.433299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.433361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.450067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.450114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.465819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.465854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.476361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.476397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.491291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.491327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.508952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.508988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.524195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.524230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.909 [2024-12-06 13:06:30.533870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.909 [2024-12-06 13:06:30.534027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.909 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.168 [2024-12-06 13:06:30.549935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.168 [2024-12-06 13:06:30.549970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.168 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.168 [2024-12-06 13:06:30.566802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.168 [2024-12-06 13:06:30.566837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.168 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.168 [2024-12-06 13:06:30.583167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.168 [2024-12-06 13:06:30.583201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.168 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.168 [2024-12-06 13:06:30.599630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.168 [2024-12-06 13:06:30.599664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.168 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.168 [2024-12-06 13:06:30.615134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.168 [2024-12-06 13:06:30.615172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.168 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.168 [2024-12-06 13:06:30.627429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.168 [2024-12-06 13:06:30.627495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.168 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.168 [2024-12-06 13:06:30.644195] 
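[Context for the elided block: each triplet above is one nvmf_subsystem_add_ns JSON-RPC request that the test client (the Go-style "2024/12/06 13:06:29" log prefix is its logger) re-issues while NSID 1 on nqn.2016-06.io.spdk:cnode1 is already taken, so the duplicate is rejected each time, evidently the negative path this loop exercises. A minimal sketch of reproducing one such failure by hand, assuming a locally running target and SPDK's bundled Python client:

  # re-adding NSID 1 to cnode1 is expected to fail once the NSID exists
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
  # -> Code=-32602 Msg=Invalid parameters (Requested NSID 1 already in use)

-32602 is the generic JSON-RPC "invalid params" code; SPDK maps the duplicate-NSID rejection from spdk_nvmf_subsystem_add_ns_ext onto it rather than defining a bespoke error code.]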
00:12:42.168 [2024-12-06 13:06:30.644195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.168 [2024-12-06 13:06:30.644230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.168 2024/12/06 13:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... identical nvmf_subsystem_add_ns failures continue, timestamped 13:06:30.659 through 13:06:30.689 ...]
00:12:42.168 11762.50 IOPS, 91.89 MiB/s [2024-12-06T13:06:30.808Z]
[... identical nvmf_subsystem_add_ns failures continue, timestamped 13:06:30.704 through 13:06:31.081 ...]
00:12:42.687 [2024-12-06 13:06:31.095824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.687 [2024-12-06 13:06:31.095861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.687 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid
parameters 00:12:42.687 [2024-12-06 13:06:31.108180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.687 [2024-12-06 13:06:31.108231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.687 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.687 [2024-12-06 13:06:31.126318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.687 [2024-12-06 13:06:31.126354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.687 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.687 [2024-12-06 13:06:31.140687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.687 [2024-12-06 13:06:31.140725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.687 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.687 [2024-12-06 13:06:31.159565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.687 [2024-12-06 13:06:31.159611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.173947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.173991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.183909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.183948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.198318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.198360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.208684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.208729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.223214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.223382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.233595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.233631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.248054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.248091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.258432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.258484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.272962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.272997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.291229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.291387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.306010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.306047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.688 [2024-12-06 13:06:31.316633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.688 [2024-12-06 13:06:31.316668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.688 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.331236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.331422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.349121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.349160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.364757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.364795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.376604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.376670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.394716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.394782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.410954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.410990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.426919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.426968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.444744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.444779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.461510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.461599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.478320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.478356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.494381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.494417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.504451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.504508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.519756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.519790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.947 [2024-12-06 13:06:31.535796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.947 [2024-12-06 13:06:31.535829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.947 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.948 [2024-12-06 13:06:31.551201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.948 [2024-12-06 13:06:31.551250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.948 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:42.948 [2024-12-06 13:06:31.566772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.948 [2024-12-06 13:06:31.566804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.948 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.948 [2024-12-06 13:06:31.583377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.948 [2024-12-06 13:06:31.583426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.599540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.599574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.615973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.616009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.633219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.633282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.648868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.648910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.659358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.659393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.674155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.674193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.690603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.690651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 11744.20 IOPS, 91.75 MiB/s [2024-12-06T13:06:31.847Z] [2024-12-06 13:06:31.704979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.705030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 00:12:43.207 Latency(us) 00:12:43.207 [2024-12-06T13:06:31.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.207 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:43.207 Nvme1n1 : 5.01 11743.49 91.75 0.00 0.00 10883.41 4706.68 20733.21 00:12:43.207 [2024-12-06T13:06:31.847Z] =================================================================================================================== 00:12:43.207 [2024-12-06T13:06:31.847Z] Total : 11743.49 91.75 0.00 0.00 10883.41 4706.68 20733.21 00:12:43.207 [2024-12-06 13:06:31.714676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.714706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.726668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 
13:06:31.726697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.738704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.738770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.750740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.750772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.762725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.762756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.774717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.774767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.786761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.786802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.798769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.207 [2024-12-06 13:06:31.798808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.207 2024/12/06 13:06:31 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.207 [2024-12-06 13:06:31.810772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.208 [2024-12-06 13:06:31.810806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.208 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.208 [2024-12-06 13:06:31.822767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.208 [2024-12-06 13:06:31.822804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.208 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.208 [2024-12-06 13:06:31.834774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.208 [2024-12-06 13:06:31.834813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.208 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.466 [2024-12-06 13:06:31.846775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.466 [2024-12-06 13:06:31.846813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.466 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.466 [2024-12-06 13:06:31.858773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.466 [2024-12-06 13:06:31.858809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.466 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.466 [2024-12-06 13:06:31.870778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.466 [2024-12-06 13:06:31.870806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.466 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.466 [2024-12-06 13:06:31.882780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.466 [2024-12-06 13:06:31.882811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.466 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.466 [2024-12-06 13:06:31.894787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.466 [2024-12-06 13:06:31.894821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.466 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.466 [2024-12-06 13:06:31.906771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.466 [2024-12-06 13:06:31.906799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.466 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.466 [2024-12-06 13:06:31.918764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.466 [2024-12-06 13:06:31.918791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.466 2024/12/06 13:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:43.466 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69270) - No such process 00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69270 00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set 
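The failure run above is zcopy.sh deliberately re-adding a namespace whose NSID is already occupied, then swapping in a delay bdev. A minimal sketch of the same sequence using SPDK's scripts/rpc.py, assuming a running nvmf_tgt reachable over the default RPC socket; the bdev/subsystem names, serial number, and delay parameters mirror the log, but the exact steps zcopy.sh performs differ:

    # Sketch only -- assumes nvmf_tgt is running and scripts/rpc.py talks to its default socket.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0            # 64 MiB malloc bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # NSID 1 now occupied
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: Code=-32602, NSID 1 already in use
    # zcopy.sh then wraps malloc0 in a delay bdev (-r/-t/-w/-n are avg/p99 read and
    # avg/p99 write latency in microseconds -- 1 s each here), presumably so the abort
    # run below has slow in-flight I/O to cancel:
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1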
00:12:43.466 delay0
13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.466 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:12:43.724 [2024-12-06 13:06:32.136204] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:12:50.283 Initializing NVMe Controllers
00:12:50.283 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:12:50.283 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:50.283 Initialization complete. Launching workers.
00:12:50.283 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 78
00:12:50.283 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 365, failed to submit 33
00:12:50.283 success 162, unsuccessful 203, failed 0
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:50.283 rmmod nvme_tcp
00:12:50.283 rmmod nvme_fabrics
00:12:50.283 rmmod nvme_keyring
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 69102 ']'
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 69102
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 69102 ']'
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 69102
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69102
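For readers skimming the abort run above, the invocation decodes roughly as follows; the flag glosses follow the common SPDK example-app option set and should be read as an informal gloss, not authoritative documentation:

    # build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r '<transport ID>'
    #   -c 0x1       core mask: run the workload on core 0 only
    #   -t 5         run time, in seconds
    #   -q 64        queue depth (outstanding I/Os)
    #   -w randrw    random mixed read/write workload
    #   -M 50        read percentage of the mix (50% reads / 50% writes)
    #   -l warning   log level
    #   -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
    #                transport ID selecting the TCP target namespace to drive
    # The summary then reads: of 365 abort commands submitted, 162 caught an I/O still
    # in flight ("success"), 203 arrived after the I/O had already completed
    # ("unsuccessful"), and none failed outright.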
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:12:50.283 killing process with pid 69102
13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69102'
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 69102
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 69102
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
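The nvmf_veth_fini steps above undo the virtual topology that nvmf_veth_init builds for NET_TYPE=virt runs: veth pairs joined by the nvmf_br bridge, with the target interfaces living in the nvmf_tgt_ns_spdk network namespace. A stripped-down sketch of the core mechanics that teardown implies, using one veth pair and no bridge (the addresses come from the NVMF_* defaults visible further down in this log; the real helper builds two pairs plus the bridge):

    ip netns add nvmf_tgt_ns_spdk                                   # target gets its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_tgt_if        # initiator <-> target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end inside
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # NVMF_FIRST_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3                                              # initiator should now reach the target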
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0
00:12:50.283
00:12:50.283 real    0m25.067s
00:12:50.283 user    0m40.562s
00:12:50.283 sys     0m6.616s
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:50.283 ************************************
00:12:50.283 END TEST nvmf_zcopy
00:12:50.283 ************************************
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:12:50.283 ************************************
00:12:50.283 START TEST nvmf_nmic
00:12:50.283 ************************************
00:12:50.283 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:12:50.283 * Looking for test storage...
00:12:50.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:12:50.543 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:50.543 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:50.543 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
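The cmp_versions xtrace above and below is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2: both version strings are split on '.', '-' and ':', each field is validated as a decimal, and fields are compared pairwise. A condensed re-sketch of that logic in plain bash, under a hypothetical helper name (the in-tree function handles more operators and edge cases):

    cmp_versions_sketch() {                     # usage: cmp_versions_sketch 1.15 '<' 2
        local IFS=.-:                           # split fields on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do       # missing fields compare as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' ]]                        # every field equal
    }
    cmp_versions_sketch 1.15 '<' 2 && echo "lcov 1.15 predates 2"   # mirrors the trace's 'return 0'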
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:50.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:50.543 --rc genhtml_branch_coverage=1
00:12:50.543 --rc genhtml_function_coverage=1
00:12:50.543 --rc genhtml_legend=1
00:12:50.543 --rc geninfo_all_blocks=1
00:12:50.543 --rc geninfo_unexecuted_blocks=1
00:12:50.543
00:12:50.543 '
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:50.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:50.543 --rc genhtml_branch_coverage=1
00:12:50.543 --rc genhtml_function_coverage=1
00:12:50.543 --rc genhtml_legend=1
00:12:50.543 --rc geninfo_all_blocks=1
00:12:50.543 --rc geninfo_unexecuted_blocks=1
00:12:50.543
00:12:50.543 '
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:12:50.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:50.543 --rc genhtml_branch_coverage=1
00:12:50.543 --rc genhtml_function_coverage=1
00:12:50.543 --rc genhtml_legend=1
00:12:50.543 --rc geninfo_all_blocks=1
00:12:50.543 --rc geninfo_unexecuted_blocks=1
00:12:50.543
00:12:50.543 '
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:12:50.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:50.543 --rc genhtml_branch_coverage=1
00:12:50.543 --rc genhtml_function_coverage=1
00:12:50.543 --rc genhtml_legend=1
00:12:50.543 --rc geninfo_all_blocks=1
00:12:50.543 --rc geninfo_unexecuted_blocks=1
00:12:50.543
00:12:50.543 '
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- #
[[ Linux == FreeBSD ]] 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:50.543 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.544 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:50.544 13:06:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:50.544 Cannot 
find device "nvmf_init_br" 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:50.544 Cannot find device "nvmf_init_br2" 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:50.544 Cannot find device "nvmf_tgt_br" 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:50.544 Cannot find device "nvmf_tgt_br2" 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:50.544 Cannot find device "nvmf_init_br" 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:50.544 Cannot find device "nvmf_init_br2" 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:50.544 Cannot find device "nvmf_tgt_br" 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:50.544 Cannot find device "nvmf_tgt_br2" 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:50.544 Cannot find device "nvmf_br" 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:12:50.544 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:50.803 Cannot find device "nvmf_init_if" 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:50.803 Cannot find device "nvmf_init_if2" 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
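The "Cannot find device" and "Cannot open network namespace" failures above are expected: nvmf_veth_init tears down any leftover topology unconditionally before building a fresh one, and each cleanup command is paired with a true step so the script tolerates the misses. The commands that follow create the remaining veth pairs, addresses, and bridge wiring; a minimal sketch of the finished topology, using the names and addresses from this trace and showing only the first initiator/target pair, is an illustration of the traced steps rather than the harness script itself:

    # Target side lives in its own network namespace; initiator side stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                              # bridge joins the *_br peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The second pair of pairs (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) mirrors this layout, and the iptables rules traced further on open TCP port 4420 on the initiator interfaces and allow forwarding across nvmf_br.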
00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:50.803 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:50.803 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:12:50.803 00:12:50.803 --- 10.0.0.3 ping statistics --- 00:12:50.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.803 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:50.803 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:51.062 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:51.062 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:12:51.062 00:12:51.062 --- 10.0.0.4 ping statistics --- 00:12:51.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.062 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:51.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:51.062 00:12:51.062 --- 10.0.0.1 ping statistics --- 00:12:51.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.062 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:51.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:51.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:12:51.062 00:12:51.062 --- 10.0.0.2 ping statistics --- 00:12:51.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.062 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69650 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69650 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 69650 ']' 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.062 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.062 [2024-12-06 13:06:39.544920] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
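At this point nvmfappstart has launched the target inside the namespace (nvmfpid=69650) and waitforlisten is polling for its RPC socket; the EAL parameter dump and reactor notices that follow are that process coming up. A minimal sketch of the launch-and-wait pattern, with the launch command taken from the trace and an illustrative polling loop standing in for waitforlisten (not the autotest_common.sh implementation):

    # Start nvmf_tgt in the target namespace: shm id 0 (-i), tracepoint group mask 0xFFFF (-e),
    # core mask 0xF (-m, four reactors).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the JSON-RPC socket until the app answers; the harness's waitforlisten
    # does the same with a bounded retry count.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done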
00:12:51.062 [2024-12-06 13:06:39.545044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.062 [2024-12-06 13:06:39.699479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.321 [2024-12-06 13:06:39.766950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.321 [2024-12-06 13:06:39.767010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.321 [2024-12-06 13:06:39.767022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.321 [2024-12-06 13:06:39.767030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.321 [2024-12-06 13:06:39.767037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.321 [2024-12-06 13:06:39.768137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.321 [2024-12-06 13:06:39.768282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.321 [2024-12-06 13:06:39.768286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.321 [2024-12-06 13:06:39.768192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.321 [2024-12-06 13:06:39.945060] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.321 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 Malloc0 00:12:51.579 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.579 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:51.579 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.579 13:06:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 [2024-12-06 13:06:40.017498] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.579 test case1: single bdev can't be used in multiple subsystems 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 [2024-12-06 13:06:40.045355] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:51.579 [2024-12-06 13:06:40.045414] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:51.579 [2024-12-06 13:06:40.045427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.579 2024/12/06 13:06:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:51.579 request: 00:12:51.579 { 00:12:51.579 "method": "nvmf_subsystem_add_ns", 00:12:51.579 "params": { 00:12:51.579 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:51.579 "namespace": { 00:12:51.579 "bdev_name": "Malloc0", 00:12:51.579 "no_auto_visible": false, 00:12:51.579 "hide_metadata": false 00:12:51.579 } 00:12:51.579 } 00:12:51.579 } 00:12:51.579 Got JSON-RPC error response 00:12:51.579 GoRPCClient: error on JSON-RPC call 00:12:51.579 Adding namespace failed - expected result. 00:12:51.579 test case2: host connect to nvmf target in multiple paths 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 [2024-12-06 13:06:40.057595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.579 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:51.837 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:12:51.837 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.837 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:12:51.837 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.837 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:51.837 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:12:54.383 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:54.383 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:54.383 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.383 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 
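test case2 has now connected the host to nqn.2016-06.io.spdk:cnode1 over both listeners (10.0.0.3:4420 and 10.0.0.3:4421), and waitforserial is polling lsblk until a device whose serial is SPDKISFASTANDAWESOME appears; nvme_devices=1 above means the loop just found it. A minimal sketch of that connect-and-wait idiom, assuming nvme-cli and a kernel with native NVMe multipath (the helper function is an illustrative stand-in for the harness's waitforserial, and --hostnqn/--hostid are omitted for brevity):

    # Two paths to the same subsystem; native multipath merges them into one block device.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421

    wait_for_serial() {    # hypothetical helper, not the autotest_common.sh function
        local serial=$1 i=0
        while (( i++ < 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1
    }
    wait_for_serial SPDKISFASTANDAWESOME

Because both controllers expose the same namespace, the fio job that follows can target the single /dev/nvme0n1, and the later nvme disconnect on that subsystem accordingly reports two controllers disconnected.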
00:12:54.383 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.383 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:12:54.383 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:54.383 [global] 00:12:54.383 thread=1 00:12:54.383 invalidate=1 00:12:54.383 rw=write 00:12:54.383 time_based=1 00:12:54.383 runtime=1 00:12:54.383 ioengine=libaio 00:12:54.383 direct=1 00:12:54.383 bs=4096 00:12:54.383 iodepth=1 00:12:54.383 norandommap=0 00:12:54.383 numjobs=1 00:12:54.383 00:12:54.383 verify_dump=1 00:12:54.383 verify_backlog=512 00:12:54.383 verify_state_save=0 00:12:54.383 do_verify=1 00:12:54.383 verify=crc32c-intel 00:12:54.383 [job0] 00:12:54.383 filename=/dev/nvme0n1 00:12:54.383 Could not set queue depth (nvme0n1) 00:12:54.383 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:54.383 fio-3.35 00:12:54.383 Starting 1 thread 00:12:55.316 00:12:55.316 job0: (groupid=0, jobs=1): err= 0: pid=69742: Fri Dec 6 13:06:43 2024 00:12:55.316 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:55.316 slat (nsec): min=13429, max=57834, avg=16650.22, stdev=4375.17 00:12:55.316 clat (usec): min=132, max=320, avg=154.24, stdev=18.69 00:12:55.316 lat (usec): min=146, max=338, avg=170.89, stdev=18.96 00:12:55.316 clat percentiles (usec): 00:12:55.316 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:12:55.316 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:12:55.316 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 172], 95.00th=[ 200], 00:12:55.316 | 99.00th=[ 231], 99.50th=[ 241], 99.90th=[ 258], 99.95th=[ 277], 00:12:55.316 | 99.99th=[ 322] 00:12:55.316 write: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1001msec); 0 zone resets 00:12:55.316 slat (nsec): min=18333, max=95724, avg=24544.34, stdev=7392.43 00:12:55.316 clat (usec): min=85, max=652, avg=110.33, stdev=17.86 00:12:55.316 lat (usec): min=114, max=694, avg=134.88, stdev=20.55 00:12:55.316 clat percentiles (usec): 00:12:55.316 | 1.00th=[ 97], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 103], 00:12:55.316 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 108], 60.00th=[ 110], 00:12:55.316 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 126], 00:12:55.316 | 99.00th=[ 147], 99.50th=[ 208], 99.90th=[ 318], 99.95th=[ 510], 00:12:55.316 | 99.99th=[ 652] 00:12:55.316 bw ( KiB/s): min=13488, max=13488, per=98.41%, avg=13488.00, stdev= 0.00, samples=1 00:12:55.316 iops : min= 3372, max= 3372, avg=3372.00, stdev= 0.00, samples=1 00:12:55.316 lat (usec) : 100=3.88%, 250=95.88%, 500=0.22%, 750=0.03% 00:12:55.316 cpu : usr=2.10%, sys=10.90%, ctx=6502, majf=0, minf=5 00:12:55.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:55.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.316 issued rwts: total=3072,3430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:55.316 00:12:55.316 Run status group 0 (all jobs): 00:12:55.316 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:12:55.316 WRITE: bw=13.4MiB/s (14.0MB/s), 13.4MiB/s-13.4MiB/s 
(14.0MB/s-14.0MB/s), io=13.4MiB (14.0MB), run=1001-1001msec 00:12:55.316 00:12:55.316 Disk stats (read/write): 00:12:55.316 nvme0n1: ios=2808/3072, merge=0/0, ticks=475/365, in_queue=840, util=91.38% 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.316 rmmod nvme_tcp 00:12:55.316 rmmod nvme_fabrics 00:12:55.316 rmmod nvme_keyring 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69650 ']' 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69650 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 69650 ']' 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 69650 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.316 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69650 00:12:55.575 killing process with pid 69650 00:12:55.575 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.575 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.575 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 69650' 00:12:55.575 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 69650 00:12:55.575 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 69650 00:12:55.575 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.575 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.575 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.575 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:55.575 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.575 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:55.575 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.575 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.575 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:55.575 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:12:55.834 00:12:55.834 real 0m5.620s 00:12:55.834 user 0m17.237s 00:12:55.834 sys 0m1.552s 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:12:55.834 ************************************ 00:12:55.834 END TEST nvmf_nmic 00:12:55.834 ************************************ 00:12:55.834 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:56.094 ************************************ 00:12:56.094 START TEST nvmf_fio_target 00:12:56.094 ************************************ 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:56.094 * Looking for test storage... 00:12:56.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:56.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.094 --rc genhtml_branch_coverage=1 00:12:56.094 --rc genhtml_function_coverage=1 00:12:56.094 --rc genhtml_legend=1 00:12:56.094 --rc geninfo_all_blocks=1 00:12:56.094 --rc geninfo_unexecuted_blocks=1 00:12:56.094 00:12:56.094 ' 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:56.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.094 --rc genhtml_branch_coverage=1 00:12:56.094 --rc genhtml_function_coverage=1 00:12:56.094 --rc genhtml_legend=1 00:12:56.094 --rc geninfo_all_blocks=1 00:12:56.094 --rc geninfo_unexecuted_blocks=1 00:12:56.094 00:12:56.094 ' 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:56.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.094 --rc genhtml_branch_coverage=1 00:12:56.094 --rc genhtml_function_coverage=1 00:12:56.094 --rc genhtml_legend=1 00:12:56.094 --rc geninfo_all_blocks=1 00:12:56.094 --rc geninfo_unexecuted_blocks=1 00:12:56.094 00:12:56.094 ' 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:56.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.094 --rc genhtml_branch_coverage=1 00:12:56.094 --rc genhtml_function_coverage=1 00:12:56.094 --rc genhtml_legend=1 00:12:56.094 --rc geninfo_all_blocks=1 00:12:56.094 --rc geninfo_unexecuted_blocks=1 00:12:56.094 00:12:56.094 ' 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:56.094 
13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:56.094 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:56.095 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:56.095 13:06:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:56.095 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:56.095 Cannot find device "nvmf_init_br" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:56.354 Cannot find device "nvmf_init_br2" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:56.354 Cannot find device "nvmf_tgt_br" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:56.354 Cannot find device "nvmf_tgt_br2" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:56.354 Cannot find device "nvmf_init_br" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:56.354 Cannot find device "nvmf_init_br2" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:56.354 Cannot find device "nvmf_tgt_br" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:56.354 Cannot find device "nvmf_tgt_br2" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:56.354 Cannot find device "nvmf_br" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:56.354 Cannot find device "nvmf_init_if" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:56.354 Cannot find device "nvmf_init_if2" 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:12:56.354 
13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:56.354 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:56.612 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:56.612 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.612 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:12:56.612 00:12:56.612 --- 10.0.0.3 ping statistics --- 00:12:56.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.612 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:56.612 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:56.612 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:12:56.612 00:12:56.612 --- 10.0.0.4 ping statistics --- 00:12:56.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.612 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:56.612 00:12:56.612 --- 10.0.0.1 ping statistics --- 00:12:56.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.612 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:56.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
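The nvmf_veth_init sequence traced above builds the virtual test topology: the "Cannot find device" / "Cannot open network namespace" messages are the expected output of the pre-clean step tearing down leftovers from a previous run, after which the script creates the nvmf_tgt_ns_spdk namespace, four veth pairs (the *_if ends carry the 10.0.0.1-10.0.0.4/24 addresses, with the target-side ends moved into the namespace), a bridge nvmf_br enslaving the four *_br peer ends, and iptables rules admitting NVMe/TCP traffic on port 4420. A minimal single-pair sketch of the same construction, assuming root plus the iproute2 and iptables tools (interface names and addresses mirror the log):

#!/usr/bin/env bash
set -e
# One initiator/target veth pair joined by a bridge, as in the traced setup.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # host-side peer ends join the bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                          # host -> namespace across the bridge

The surrounding pings check all four directions (host to both target addresses, namespace back to both initiator addresses) before the target is started.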
00:12:56.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:12:56.612 00:12:56.612 --- 10.0.0.2 ping statistics --- 00:12:56.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.612 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69980 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.612 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69980 00:12:56.613 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 69980 ']' 00:12:56.613 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.613 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.613 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.613 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.613 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.613 [2024-12-06 13:06:45.183710] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
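With connectivity verified, nvmfappstart launches the target inside the namespace (the ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF command above, pid 69980) and waits for its RPC socket, after which fio.sh provisions bdevs, a subsystem, and a listener over rpc.py and connects the kernel initiator, as traced below. A condensed sketch of that flow with flags copied from the traced commands; polling rpc_get_methods here is only a stand-in for the test's waitforlisten helper:

#!/usr/bin/env bash
set -e
spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py"

# Start the target inside the namespace (shm id 0, tracepoint mask 0xFFFF, cores 0-3).
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

# Wait until the RPC socket answers (approximation of waitforlisten).
until "$rpc" rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512     # -> Malloc0 (64 MiB, 512-byte blocks)
"$rpc" bdev_malloc_create 64 512     # -> Malloc1
"$rpc" bdev_malloc_create 64 512     # -> Malloc2
"$rpc" bdev_malloc_create 64 512     # -> Malloc3
"$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'

"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0; do
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

The traced run additionally builds a three-member concat0 array the same way and pins --hostnqn/--hostid on the connect, then polls lsblk for the serial SPDKISFASTANDAWESOME until all four namespaces appear (the waitforserial step visible below).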
00:12:56.613 [2024-12-06 13:06:45.183810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.870 [2024-12-06 13:06:45.335331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.870 [2024-12-06 13:06:45.412334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.870 [2024-12-06 13:06:45.412410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.870 [2024-12-06 13:06:45.412434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.870 [2024-12-06 13:06:45.412445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.870 [2024-12-06 13:06:45.412480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.870 [2024-12-06 13:06:45.413767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.870 [2024-12-06 13:06:45.413865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.870 [2024-12-06 13:06:45.413994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.870 [2024-12-06 13:06:45.413998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.802 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.802 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:57.802 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.802 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.802 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.803 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.803 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:58.064 [2024-12-06 13:06:46.535582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.064 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:58.328 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:58.328 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:58.893 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:58.893 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:58.893 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:58.893 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:59.456 13:06:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:59.456 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:59.456 13:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:00.020 13:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:00.020 13:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:00.277 13:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:00.277 13:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:00.533 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:00.533 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:00.789 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:01.355 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:01.355 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:01.355 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:01.355 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.614 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:01.872 [2024-12-06 13:06:50.485202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:01.872 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:02.438 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:02.438 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:02.696 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:02.696 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:13:02.696 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:13:02.696 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:13:02.696 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:13:02.696 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:13:04.603 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:04.603 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:04.603 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.603 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:13:04.603 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.603 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:13:04.603 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:04.860 [global] 00:13:04.860 thread=1 00:13:04.860 invalidate=1 00:13:04.860 rw=write 00:13:04.860 time_based=1 00:13:04.860 runtime=1 00:13:04.860 ioengine=libaio 00:13:04.860 direct=1 00:13:04.861 bs=4096 00:13:04.861 iodepth=1 00:13:04.861 norandommap=0 00:13:04.861 numjobs=1 00:13:04.861 00:13:04.861 verify_dump=1 00:13:04.861 verify_backlog=512 00:13:04.861 verify_state_save=0 00:13:04.861 do_verify=1 00:13:04.861 verify=crc32c-intel 00:13:04.861 [job0] 00:13:04.861 filename=/dev/nvme0n1 00:13:04.861 [job1] 00:13:04.861 filename=/dev/nvme0n2 00:13:04.861 [job2] 00:13:04.861 filename=/dev/nvme0n3 00:13:04.861 [job3] 00:13:04.861 filename=/dev/nvme0n4 00:13:04.861 Could not set queue depth (nvme0n1) 00:13:04.861 Could not set queue depth (nvme0n2) 00:13:04.861 Could not set queue depth (nvme0n3) 00:13:04.861 Could not set queue depth (nvme0n4) 00:13:04.861 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:04.861 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:04.861 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:04.861 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:04.861 fio-3.35 00:13:04.861 Starting 4 threads 00:13:06.233 00:13:06.233 job0: (groupid=0, jobs=1): err= 0: pid=70278: Fri Dec 6 13:06:54 2024 00:13:06.233 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:06.233 slat (nsec): min=7188, max=57758, avg=16988.09, stdev=8690.41 00:13:06.233 clat (usec): min=284, max=743, avg=335.58, stdev=36.92 00:13:06.233 lat (usec): min=299, max=785, avg=352.57, stdev=38.63 00:13:06.233 clat percentiles (usec): 00:13:06.233 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 314], 00:13:06.233 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:13:06.233 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 400], 00:13:06.233 | 99.00th=[ 490], 99.50th=[ 515], 99.90th=[ 676], 99.95th=[ 742], 00:13:06.233 | 99.99th=[ 742] 00:13:06.233 write: IOPS=1718, BW=6873KiB/s (7038kB/s)(6880KiB/1001msec); 0 zone resets 00:13:06.233 slat (usec): 
min=9, max=118, avg=21.70, stdev= 8.69 00:13:06.233 clat (usec): min=109, max=7314, avg=241.46, stdev=250.06 00:13:06.233 lat (usec): min=132, max=7349, avg=263.17, stdev=249.96 00:13:06.233 clat percentiles (usec): 00:13:06.233 | 1.00th=[ 116], 5.00th=[ 122], 10.00th=[ 127], 20.00th=[ 135], 00:13:06.233 | 30.00th=[ 200], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 265], 00:13:06.233 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:13:06.233 | 99.00th=[ 355], 99.50th=[ 494], 99.90th=[ 5669], 99.95th=[ 7308], 00:13:06.233 | 99.99th=[ 7308] 00:13:06.233 bw ( KiB/s): min= 8192, max= 8192, per=25.43%, avg=8192.00, stdev= 0.00, samples=1 00:13:06.233 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:06.233 lat (usec) : 250=21.50%, 500=77.92%, 750=0.37%, 1000=0.06% 00:13:06.233 lat (msec) : 2=0.03%, 4=0.06%, 10=0.06% 00:13:06.233 cpu : usr=1.00%, sys=5.50%, ctx=3514, majf=0, minf=11 00:13:06.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.233 issued rwts: total=1536,1720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.233 job1: (groupid=0, jobs=1): err= 0: pid=70279: Fri Dec 6 13:06:54 2024 00:13:06.233 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:06.233 slat (nsec): min=7040, max=91659, avg=14669.48, stdev=10150.27 00:13:06.233 clat (usec): min=218, max=716, avg=334.10, stdev=33.47 00:13:06.233 lat (usec): min=231, max=724, avg=348.77, stdev=33.91 00:13:06.233 clat percentiles (usec): 00:13:06.233 | 1.00th=[ 285], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 314], 00:13:06.233 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:13:06.233 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 388], 00:13:06.233 | 99.00th=[ 474], 99.50th=[ 515], 99.90th=[ 709], 99.95th=[ 717], 00:13:06.233 | 99.99th=[ 717] 00:13:06.233 write: IOPS=1633, BW=6533KiB/s (6690kB/s)(6540KiB/1001msec); 0 zone resets 00:13:06.233 slat (usec): min=10, max=121, avg=21.11, stdev= 8.91 00:13:06.233 clat (usec): min=109, max=2026, avg=259.91, stdev=63.32 00:13:06.233 lat (usec): min=128, max=2049, avg=281.02, stdev=62.61 00:13:06.233 clat percentiles (usec): 00:13:06.233 | 1.00th=[ 153], 5.00th=[ 196], 10.00th=[ 208], 20.00th=[ 227], 00:13:06.233 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 269], 00:13:06.233 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 322], 00:13:06.233 | 99.00th=[ 383], 99.50th=[ 420], 99.90th=[ 947], 99.95th=[ 2024], 00:13:06.233 | 99.99th=[ 2024] 00:13:06.233 bw ( KiB/s): min= 8192, max= 8192, per=25.43%, avg=8192.00, stdev= 0.00, samples=1 00:13:06.233 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:06.233 lat (usec) : 250=19.02%, 500=80.54%, 750=0.38%, 1000=0.03% 00:13:06.233 lat (msec) : 4=0.03% 00:13:06.233 cpu : usr=1.30%, sys=4.30%, ctx=3439, majf=0, minf=12 00:13:06.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.233 issued rwts: total=1536,1635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.233 job2: (groupid=0, jobs=1): 
err= 0: pid=70280: Fri Dec 6 13:06:54 2024 00:13:06.233 read: IOPS=2792, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1000msec) 00:13:06.233 slat (nsec): min=12331, max=41202, avg=14699.55, stdev=2631.50 00:13:06.233 clat (usec): min=147, max=903, avg=171.78, stdev=28.89 00:13:06.233 lat (usec): min=161, max=930, avg=186.48, stdev=29.69 00:13:06.233 clat percentiles (usec): 00:13:06.233 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 161], 00:13:06.233 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:13:06.233 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:13:06.233 | 99.00th=[ 227], 99.50th=[ 281], 99.90th=[ 824], 99.95th=[ 881], 00:13:06.233 | 99.99th=[ 906] 00:13:06.233 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:13:06.233 slat (usec): min=14, max=126, avg=21.29, stdev= 4.48 00:13:06.233 clat (usec): min=91, max=332, avg=131.81, stdev=11.92 00:13:06.233 lat (usec): min=129, max=382, avg=153.10, stdev=13.74 00:13:06.233 clat percentiles (usec): 00:13:06.233 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 121], 20.00th=[ 124], 00:13:06.233 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:13:06.233 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 153], 00:13:06.233 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 212], 99.95th=[ 293], 00:13:06.233 | 99.99th=[ 334] 00:13:06.233 bw ( KiB/s): min=12288, max=12288, per=38.14%, avg=12288.00, stdev= 0.00, samples=1 00:13:06.233 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:06.233 lat (usec) : 100=0.02%, 250=99.64%, 500=0.29%, 1000=0.05% 00:13:06.233 cpu : usr=1.90%, sys=8.10%, ctx=5865, majf=0, minf=7 00:13:06.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.233 issued rwts: total=2792,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.233 job3: (groupid=0, jobs=1): err= 0: pid=70281: Fri Dec 6 13:06:54 2024 00:13:06.233 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:06.233 slat (nsec): min=7980, max=57639, avg=14204.89, stdev=5816.77 00:13:06.233 clat (usec): min=225, max=908, avg=334.71, stdev=30.64 00:13:06.233 lat (usec): min=239, max=922, avg=348.91, stdev=31.32 00:13:06.233 clat percentiles (usec): 00:13:06.233 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:13:06.233 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:13:06.233 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 383], 00:13:06.233 | 99.00th=[ 441], 99.50th=[ 490], 99.90th=[ 553], 99.95th=[ 906], 00:13:06.233 | 99.99th=[ 906] 00:13:06.233 write: IOPS=1634, BW=6537KiB/s (6694kB/s)(6544KiB/1001msec); 0 zone resets 00:13:06.233 slat (usec): min=11, max=128, avg=21.88, stdev= 8.69 00:13:06.233 clat (usec): min=99, max=1963, avg=258.53, stdev=62.31 00:13:06.233 lat (usec): min=121, max=1986, avg=280.41, stdev=61.52 00:13:06.233 clat percentiles (usec): 00:13:06.233 | 1.00th=[ 153], 5.00th=[ 194], 10.00th=[ 206], 20.00th=[ 225], 00:13:06.233 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 269], 00:13:06.233 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 314], 00:13:06.233 | 99.00th=[ 375], 99.50th=[ 445], 99.90th=[ 938], 99.95th=[ 1958], 00:13:06.233 | 99.99th=[ 1958] 00:13:06.233 bw ( KiB/s): min= 8192, 
max= 8192, per=25.43%, avg=8192.00, stdev= 0.00, samples=1 00:13:06.233 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:06.233 lat (usec) : 100=0.03%, 250=18.85%, 500=80.77%, 750=0.25%, 1000=0.06% 00:13:06.233 lat (msec) : 2=0.03% 00:13:06.233 cpu : usr=1.40%, sys=4.50%, ctx=3240, majf=0, minf=7 00:13:06.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.233 issued rwts: total=1536,1636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.234 00:13:06.234 Run status group 0 (all jobs): 00:13:06.234 READ: bw=28.9MiB/s (30.3MB/s), 6138KiB/s-10.9MiB/s (6285kB/s-11.4MB/s), io=28.9MiB (30.3MB), run=1000-1001msec 00:13:06.234 WRITE: bw=31.5MiB/s (33.0MB/s), 6533KiB/s-12.0MiB/s (6690kB/s-12.6MB/s), io=31.5MiB (33.0MB), run=1000-1001msec 00:13:06.234 00:13:06.234 Disk stats (read/write): 00:13:06.234 nvme0n1: ios=1298/1536, merge=0/0, ticks=443/359, in_queue=802, util=86.56% 00:13:06.234 nvme0n2: ios=1227/1536, merge=0/0, ticks=416/389, in_queue=805, util=87.91% 00:13:06.234 nvme0n3: ios=2436/2560, merge=0/0, ticks=420/355, in_queue=775, util=89.13% 00:13:06.234 nvme0n4: ios=1191/1536, merge=0/0, ticks=387/399, in_queue=786, util=89.70% 00:13:06.234 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:06.234 [global] 00:13:06.234 thread=1 00:13:06.234 invalidate=1 00:13:06.234 rw=randwrite 00:13:06.234 time_based=1 00:13:06.234 runtime=1 00:13:06.234 ioengine=libaio 00:13:06.234 direct=1 00:13:06.234 bs=4096 00:13:06.234 iodepth=1 00:13:06.234 norandommap=0 00:13:06.234 numjobs=1 00:13:06.234 00:13:06.234 verify_dump=1 00:13:06.234 verify_backlog=512 00:13:06.234 verify_state_save=0 00:13:06.234 do_verify=1 00:13:06.234 verify=crc32c-intel 00:13:06.234 [job0] 00:13:06.234 filename=/dev/nvme0n1 00:13:06.234 [job1] 00:13:06.234 filename=/dev/nvme0n2 00:13:06.234 [job2] 00:13:06.234 filename=/dev/nvme0n3 00:13:06.234 [job3] 00:13:06.234 filename=/dev/nvme0n4 00:13:06.234 Could not set queue depth (nvme0n1) 00:13:06.234 Could not set queue depth (nvme0n2) 00:13:06.234 Could not set queue depth (nvme0n3) 00:13:06.234 Could not set queue depth (nvme0n4) 00:13:06.234 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.234 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.234 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.234 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.234 fio-3.35 00:13:06.234 Starting 4 threads 00:13:07.634 00:13:07.634 job0: (groupid=0, jobs=1): err= 0: pid=70340: Fri Dec 6 13:06:55 2024 00:13:07.634 read: IOPS=1296, BW=5187KiB/s (5311kB/s)(5192KiB/1001msec) 00:13:07.634 slat (nsec): min=8396, max=73200, avg=14539.96, stdev=3692.38 00:13:07.634 clat (usec): min=170, max=40477, avg=411.20, stdev=1114.33 00:13:07.634 lat (usec): min=183, max=40494, avg=425.74, stdev=1114.38 00:13:07.634 clat percentiles (usec): 00:13:07.634 | 1.00th=[ 255], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 338], 00:13:07.634 | 30.00th=[ 
347], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 371], 00:13:07.634 | 70.00th=[ 429], 80.00th=[ 445], 90.00th=[ 457], 95.00th=[ 469], 00:13:07.634 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 619], 99.95th=[40633], 00:13:07.634 | 99.99th=[40633] 00:13:07.634 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:07.634 slat (usec): min=10, max=118, avg=20.51, stdev= 5.42 00:13:07.634 clat (usec): min=104, max=725, avg=267.60, stdev=80.30 00:13:07.634 lat (usec): min=125, max=743, avg=288.11, stdev=80.17 00:13:07.634 clat percentiles (usec): 00:13:07.634 | 1.00th=[ 110], 5.00th=[ 117], 10.00th=[ 126], 20.00th=[ 212], 00:13:07.634 | 30.00th=[ 255], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:13:07.634 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 371], 95.00th=[ 388], 00:13:07.634 | 99.00th=[ 424], 99.50th=[ 510], 99.90th=[ 676], 99.95th=[ 725], 00:13:07.634 | 99.99th=[ 725] 00:13:07.634 bw ( KiB/s): min= 6808, max= 6808, per=22.18%, avg=6808.00, stdev= 0.00, samples=1 00:13:07.634 iops : min= 1702, max= 1702, avg=1702.00, stdev= 0.00, samples=1 00:13:07.634 lat (usec) : 250=15.14%, 500=84.16%, 750=0.67% 00:13:07.634 lat (msec) : 50=0.04% 00:13:07.634 cpu : usr=0.80%, sys=4.50%, ctx=2836, majf=0, minf=19 00:13:07.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.634 issued rwts: total=1298,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.635 job1: (groupid=0, jobs=1): err= 0: pid=70341: Fri Dec 6 13:06:55 2024 00:13:07.635 read: IOPS=1482, BW=5930KiB/s (6072kB/s)(5936KiB/1001msec) 00:13:07.635 slat (nsec): min=12570, max=70943, avg=22606.22, stdev=8609.94 00:13:07.635 clat (usec): min=135, max=996, avg=368.77, stdev=107.70 00:13:07.635 lat (usec): min=152, max=1054, avg=391.38, stdev=112.54 00:13:07.635 clat percentiles (usec): 00:13:07.635 | 1.00th=[ 143], 5.00th=[ 157], 10.00th=[ 210], 20.00th=[ 330], 00:13:07.635 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 359], 00:13:07.635 | 70.00th=[ 392], 80.00th=[ 486], 90.00th=[ 510], 95.00th=[ 529], 00:13:07.635 | 99.00th=[ 594], 99.50th=[ 701], 99.90th=[ 816], 99.95th=[ 996], 00:13:07.635 | 99.99th=[ 996] 00:13:07.635 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:07.635 slat (usec): min=17, max=109, avg=28.33, stdev= 8.36 00:13:07.635 clat (usec): min=139, max=1411, avg=239.86, stdev=49.39 00:13:07.635 lat (usec): min=184, max=1443, avg=268.19, stdev=53.15 00:13:07.635 clat percentiles (usec): 00:13:07.635 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:13:07.635 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 237], 60.00th=[ 249], 00:13:07.635 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 297], 00:13:07.635 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 453], 99.95th=[ 1418], 00:13:07.635 | 99.99th=[ 1418] 00:13:07.635 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:13:07.635 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:07.635 lat (usec) : 250=37.88%, 500=54.70%, 750=7.25%, 1000=0.13% 00:13:07.635 lat (msec) : 2=0.03% 00:13:07.635 cpu : usr=1.50%, sys=5.90%, ctx=3020, majf=0, minf=2 00:13:07.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.635 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.635 issued rwts: total=1484,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.635 job2: (groupid=0, jobs=1): err= 0: pid=70342: Fri Dec 6 13:06:55 2024 00:13:07.635 read: IOPS=2922, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec) 00:13:07.635 slat (nsec): min=11990, max=64711, avg=13820.75, stdev=2847.30 00:13:07.635 clat (usec): min=133, max=331, avg=167.89, stdev=11.92 00:13:07.635 lat (usec): min=156, max=354, avg=181.71, stdev=12.64 00:13:07.635 clat percentiles (usec): 00:13:07.635 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:13:07.635 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:13:07.635 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 190], 00:13:07.635 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 247], 99.95th=[ 285], 00:13:07.635 | 99.99th=[ 330] 00:13:07.635 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:07.635 slat (usec): min=16, max=102, avg=19.47, stdev= 3.91 00:13:07.635 clat (usec): min=108, max=965, avg=130.07, stdev=20.60 00:13:07.635 lat (usec): min=126, max=982, avg=149.54, stdev=21.47 00:13:07.635 clat percentiles (usec): 00:13:07.635 | 1.00th=[ 114], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 123], 00:13:07.635 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 130], 00:13:07.635 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:13:07.635 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 206], 99.95th=[ 652], 00:13:07.635 | 99.99th=[ 963] 00:13:07.635 bw ( KiB/s): min=12288, max=12288, per=40.04%, avg=12288.00, stdev= 0.00, samples=1 00:13:07.635 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:07.635 lat (usec) : 250=99.92%, 500=0.05%, 750=0.02%, 1000=0.02% 00:13:07.635 cpu : usr=2.50%, sys=7.00%, ctx=5997, majf=0, minf=9 00:13:07.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.635 issued rwts: total=2925,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.635 job3: (groupid=0, jobs=1): err= 0: pid=70343: Fri Dec 6 13:06:55 2024 00:13:07.635 read: IOPS=1296, BW=5187KiB/s (5311kB/s)(5192KiB/1001msec) 00:13:07.635 slat (nsec): min=8392, max=56045, avg=16021.93, stdev=4049.65 00:13:07.635 clat (usec): min=168, max=40510, avg=409.79, stdev=1115.29 00:13:07.635 lat (usec): min=182, max=40525, avg=425.81, stdev=1115.22 00:13:07.635 clat percentiles (usec): 00:13:07.635 | 1.00th=[ 260], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 338], 00:13:07.635 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 367], 00:13:07.635 | 70.00th=[ 429], 80.00th=[ 445], 90.00th=[ 457], 95.00th=[ 469], 00:13:07.635 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[ 685], 99.95th=[40633], 00:13:07.635 | 99.99th=[40633] 00:13:07.635 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:07.635 slat (nsec): min=10356, max=82477, avg=24238.43, stdev=7518.91 00:13:07.635 clat (usec): min=116, max=647, avg=263.33, stdev=56.82 00:13:07.635 lat (usec): min=136, max=667, avg=287.57, stdev=57.61 00:13:07.635 clat percentiles (usec): 00:13:07.635 | 1.00th=[ 123], 
5.00th=[ 143], 10.00th=[ 206], 20.00th=[ 227], 00:13:07.635 | 30.00th=[ 249], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 273], 00:13:07.635 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 326], 95.00th=[ 359], 00:13:07.635 | 99.00th=[ 420], 99.50th=[ 506], 99.90th=[ 603], 99.95th=[ 652], 00:13:07.635 | 99.99th=[ 652] 00:13:07.635 bw ( KiB/s): min= 6800, max= 6800, per=22.16%, avg=6800.00, stdev= 0.00, samples=1 00:13:07.635 iops : min= 1700, max= 1700, avg=1700.00, stdev= 0.00, samples=1 00:13:07.635 lat (usec) : 250=17.15%, 500=82.04%, 750=0.78% 00:13:07.635 lat (msec) : 50=0.04% 00:13:07.635 cpu : usr=1.40%, sys=4.70%, ctx=2834, majf=0, minf=17 00:13:07.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.635 issued rwts: total=1298,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.635 00:13:07.635 Run status group 0 (all jobs): 00:13:07.635 READ: bw=27.3MiB/s (28.7MB/s), 5187KiB/s-11.4MiB/s (5311kB/s-12.0MB/s), io=27.4MiB (28.7MB), run=1001-1001msec 00:13:07.635 WRITE: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:13:07.635 00:13:07.635 Disk stats (read/write): 00:13:07.635 nvme0n1: ios=1071/1377, merge=0/0, ticks=451/351, in_queue=802, util=87.54% 00:13:07.635 nvme0n2: ios=1161/1536, merge=0/0, ticks=431/383, in_queue=814, util=87.86% 00:13:07.635 nvme0n3: ios=2553/2560, merge=0/0, ticks=441/350, in_queue=791, util=89.18% 00:13:07.635 nvme0n4: ios=1024/1377, merge=0/0, ticks=443/366, in_queue=809, util=89.74% 00:13:07.635 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:07.635 [global] 00:13:07.635 thread=1 00:13:07.635 invalidate=1 00:13:07.635 rw=write 00:13:07.635 time_based=1 00:13:07.635 runtime=1 00:13:07.635 ioengine=libaio 00:13:07.635 direct=1 00:13:07.635 bs=4096 00:13:07.635 iodepth=128 00:13:07.635 norandommap=0 00:13:07.635 numjobs=1 00:13:07.635 00:13:07.635 verify_dump=1 00:13:07.635 verify_backlog=512 00:13:07.635 verify_state_save=0 00:13:07.635 do_verify=1 00:13:07.635 verify=crc32c-intel 00:13:07.635 [job0] 00:13:07.635 filename=/dev/nvme0n1 00:13:07.635 [job1] 00:13:07.635 filename=/dev/nvme0n2 00:13:07.635 [job2] 00:13:07.635 filename=/dev/nvme0n3 00:13:07.635 [job3] 00:13:07.635 filename=/dev/nvme0n4 00:13:07.635 Could not set queue depth (nvme0n1) 00:13:07.635 Could not set queue depth (nvme0n2) 00:13:07.635 Could not set queue depth (nvme0n3) 00:13:07.635 Could not set queue depth (nvme0n4) 00:13:07.635 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:07.635 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:07.635 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:07.635 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:07.635 fio-3.35 00:13:07.635 Starting 4 threads 00:13:09.010 00:13:09.010 job0: (groupid=0, jobs=1): err= 0: pid=70404: Fri Dec 6 13:06:57 2024 00:13:09.010 read: IOPS=2074, BW=8299KiB/s (8498kB/s)(8324KiB/1003msec) 00:13:09.010 slat (usec): min=4, 
max=15847, avg=185.90, stdev=924.33 00:13:09.010 clat (usec): min=1337, max=53190, avg=23949.35, stdev=7700.88 00:13:09.010 lat (usec): min=2896, max=53229, avg=24135.26, stdev=7732.79 00:13:09.010 clat percentiles (usec): 00:13:09.010 | 1.00th=[13042], 5.00th=[17695], 10.00th=[17957], 20.00th=[18482], 00:13:09.010 | 30.00th=[18744], 40.00th=[18744], 50.00th=[20841], 60.00th=[24773], 00:13:09.010 | 70.00th=[25297], 80.00th=[28705], 90.00th=[34866], 95.00th=[42206], 00:13:09.010 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[50070], 00:13:09.010 | 99.99th=[53216] 00:13:09.010 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:13:09.010 slat (usec): min=10, max=9957, avg=231.15, stdev=1018.51 00:13:09.010 clat (usec): min=13285, max=62435, avg=29857.82, stdev=8922.33 00:13:09.010 lat (usec): min=13309, max=62459, avg=30088.97, stdev=8956.43 00:13:09.010 clat percentiles (usec): 00:13:09.010 | 1.00th=[16909], 5.00th=[21890], 10.00th=[22938], 20.00th=[23725], 00:13:09.010 | 30.00th=[24511], 40.00th=[25822], 50.00th=[28181], 60.00th=[29230], 00:13:09.010 | 70.00th=[30016], 80.00th=[32900], 90.00th=[43254], 95.00th=[53216], 00:13:09.010 | 99.00th=[58983], 99.50th=[61604], 99.90th=[62653], 99.95th=[62653], 00:13:09.010 | 99.99th=[62653] 00:13:09.010 bw ( KiB/s): min= 9571, max=10168, per=16.02%, avg=9869.50, stdev=422.14, samples=2 00:13:09.010 iops : min= 2392, max= 2542, avg=2467.00, stdev=106.07, samples=2 00:13:09.010 lat (msec) : 2=0.02%, 4=0.02%, 10=0.17%, 20=22.34%, 50=73.91% 00:13:09.010 lat (msec) : 100=3.53% 00:13:09.010 cpu : usr=2.79%, sys=7.29%, ctx=421, majf=0, minf=13 00:13:09.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:13:09.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.010 issued rwts: total=2081,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.010 job1: (groupid=0, jobs=1): err= 0: pid=70405: Fri Dec 6 13:06:57 2024 00:13:09.010 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:13:09.010 slat (usec): min=4, max=3553, avg=78.81, stdev=395.77 00:13:09.010 clat (usec): min=7178, max=14410, avg=10519.20, stdev=1164.25 00:13:09.010 lat (usec): min=7282, max=15193, avg=10598.01, stdev=1190.40 00:13:09.010 clat percentiles (usec): 00:13:09.010 | 1.00th=[ 7898], 5.00th=[ 8979], 10.00th=[ 9110], 20.00th=[ 9372], 00:13:09.010 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10552], 60.00th=[10945], 00:13:09.010 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12387], 00:13:09.010 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14222], 99.95th=[14353], 00:13:09.010 | 99.99th=[14353] 00:13:09.010 write: IOPS=6223, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1002msec); 0 zone resets 00:13:09.010 slat (usec): min=9, max=3059, avg=75.57, stdev=341.17 00:13:09.010 clat (usec): min=253, max=13965, avg=9951.29, stdev=1483.54 00:13:09.011 lat (usec): min=2208, max=13983, avg=10026.86, stdev=1469.95 00:13:09.011 clat percentiles (usec): 00:13:09.011 | 1.00th=[ 5800], 5.00th=[ 7308], 10.00th=[ 7963], 20.00th=[ 8979], 00:13:09.011 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10552], 00:13:09.011 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:13:09.011 | 99.00th=[12387], 99.50th=[12911], 99.90th=[13960], 99.95th=[13960], 00:13:09.011 | 99.99th=[13960] 00:13:09.011 bw ( KiB/s): 
min=24576, max=24625, per=39.92%, avg=24600.50, stdev=34.65, samples=2 00:13:09.011 iops : min= 6144, max= 6156, avg=6150.00, stdev= 8.49, samples=2 00:13:09.011 lat (usec) : 500=0.01% 00:13:09.011 lat (msec) : 4=0.39%, 10=43.44%, 20=56.16% 00:13:09.011 cpu : usr=4.50%, sys=16.38%, ctx=510, majf=0, minf=8 00:13:09.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:09.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.011 issued rwts: total=6144,6236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.011 job2: (groupid=0, jobs=1): err= 0: pid=70406: Fri Dec 6 13:06:57 2024 00:13:09.011 read: IOPS=2418, BW=9675KiB/s (9907kB/s)(9704KiB/1003msec) 00:13:09.011 slat (usec): min=6, max=14892, avg=176.69, stdev=1080.66 00:13:09.011 clat (usec): min=1519, max=49075, avg=20821.03, stdev=6742.21 00:13:09.011 lat (usec): min=10899, max=49092, avg=20997.72, stdev=6838.13 00:13:09.011 clat percentiles (usec): 00:13:09.011 | 1.00th=[11600], 5.00th=[13304], 10.00th=[15008], 20.00th=[15533], 00:13:09.011 | 30.00th=[15664], 40.00th=[16057], 50.00th=[17957], 60.00th=[21890], 00:13:09.011 | 70.00th=[23987], 80.00th=[28443], 90.00th=[29754], 95.00th=[30278], 00:13:09.011 | 99.00th=[43254], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:13:09.011 | 99.99th=[49021] 00:13:09.011 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:13:09.011 slat (usec): min=13, max=8682, avg=214.06, stdev=759.97 00:13:09.011 clat (usec): min=12978, max=58512, avg=29702.35, stdev=9629.79 00:13:09.011 lat (usec): min=13002, max=58538, avg=29916.41, stdev=9698.68 00:13:09.011 clat percentiles (usec): 00:13:09.011 | 1.00th=[14353], 5.00th=[18744], 10.00th=[19530], 20.00th=[20317], 00:13:09.011 | 30.00th=[23462], 40.00th=[24249], 50.00th=[27395], 60.00th=[30802], 00:13:09.011 | 70.00th=[32637], 80.00th=[38536], 90.00th=[45351], 95.00th=[49546], 00:13:09.011 | 99.00th=[52691], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:13:09.011 | 99.99th=[58459] 00:13:09.011 bw ( KiB/s): min= 8192, max=12288, per=16.62%, avg=10240.00, stdev=2896.31, samples=2 00:13:09.011 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:13:09.011 lat (msec) : 2=0.02%, 20=35.20%, 50=62.37%, 100=2.41% 00:13:09.011 cpu : usr=2.69%, sys=8.58%, ctx=345, majf=0, minf=7 00:13:09.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:13:09.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.011 issued rwts: total=2426,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.011 job3: (groupid=0, jobs=1): err= 0: pid=70407: Fri Dec 6 13:06:57 2024 00:13:09.011 read: IOPS=3701, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1003msec) 00:13:09.011 slat (usec): min=4, max=8882, avg=124.64, stdev=607.40 00:13:09.011 clat (usec): min=1019, max=44386, avg=15637.51, stdev=6199.20 00:13:09.011 lat (usec): min=2483, max=44399, avg=15762.15, stdev=6240.82 00:13:09.011 clat percentiles (usec): 00:13:09.011 | 1.00th=[ 8979], 5.00th=[10290], 10.00th=[11469], 20.00th=[12125], 00:13:09.011 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[13435], 00:13:09.011 | 70.00th=[14877], 80.00th=[20579], 90.00th=[25035], 
95.00th=[28181], 00:13:09.011 | 99.00th=[37487], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:13:09.011 | 99.99th=[44303] 00:13:09.011 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:13:09.011 slat (usec): min=10, max=9341, avg=123.98, stdev=650.41 00:13:09.011 clat (usec): min=7164, max=41531, avg=16773.25, stdev=7566.39 00:13:09.011 lat (usec): min=7177, max=41550, avg=16897.24, stdev=7606.20 00:13:09.011 clat percentiles (usec): 00:13:09.011 | 1.00th=[ 8225], 5.00th=[10552], 10.00th=[11731], 20.00th=[12256], 00:13:09.011 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13304], 00:13:09.011 | 70.00th=[15008], 80.00th=[24511], 90.00th=[30016], 95.00th=[30802], 00:13:09.011 | 99.00th=[39060], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:13:09.011 | 99.99th=[41681] 00:13:09.011 bw ( KiB/s): min=12288, max=20521, per=26.62%, avg=16404.50, stdev=5821.61, samples=2 00:13:09.011 iops : min= 3072, max= 5130, avg=4101.00, stdev=1455.23, samples=2 00:13:09.011 lat (msec) : 2=0.01%, 4=0.01%, 10=4.05%, 20=73.22%, 50=22.70% 00:13:09.011 cpu : usr=3.69%, sys=10.78%, ctx=573, majf=0, minf=6 00:13:09.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:09.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.011 issued rwts: total=3713,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.011 00:13:09.011 Run status group 0 (all jobs): 00:13:09.011 READ: bw=55.9MiB/s (58.7MB/s), 8299KiB/s-24.0MiB/s (8498kB/s-25.1MB/s), io=56.1MiB (58.8MB), run=1002-1003msec 00:13:09.011 WRITE: bw=60.2MiB/s (63.1MB/s), 9.97MiB/s-24.3MiB/s (10.5MB/s-25.5MB/s), io=60.4MiB (63.3MB), run=1002-1003msec 00:13:09.011 00:13:09.011 Disk stats (read/write): 00:13:09.011 nvme0n1: ios=2037/2048, merge=0/0, ticks=13576/18386, in_queue=31962, util=88.38% 00:13:09.011 nvme0n2: ios=5168/5422, merge=0/0, ticks=16167/15423, in_queue=31590, util=89.08% 00:13:09.011 nvme0n3: ios=2048/2391, merge=0/0, ticks=19889/32241, in_queue=52130, util=89.30% 00:13:09.011 nvme0n4: ios=3584/3639, merge=0/0, ticks=21750/20342, in_queue=42092, util=89.23% 00:13:09.011 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:09.011 [global] 00:13:09.011 thread=1 00:13:09.011 invalidate=1 00:13:09.011 rw=randwrite 00:13:09.011 time_based=1 00:13:09.011 runtime=1 00:13:09.011 ioengine=libaio 00:13:09.011 direct=1 00:13:09.011 bs=4096 00:13:09.011 iodepth=128 00:13:09.011 norandommap=0 00:13:09.011 numjobs=1 00:13:09.011 00:13:09.011 verify_dump=1 00:13:09.011 verify_backlog=512 00:13:09.011 verify_state_save=0 00:13:09.011 do_verify=1 00:13:09.011 verify=crc32c-intel 00:13:09.011 [job0] 00:13:09.011 filename=/dev/nvme0n1 00:13:09.011 [job1] 00:13:09.011 filename=/dev/nvme0n2 00:13:09.011 [job2] 00:13:09.011 filename=/dev/nvme0n3 00:13:09.011 [job3] 00:13:09.011 filename=/dev/nvme0n4 00:13:09.011 Could not set queue depth (nvme0n1) 00:13:09.011 Could not set queue depth (nvme0n2) 00:13:09.011 Could not set queue depth (nvme0n3) 00:13:09.011 Could not set queue depth (nvme0n4) 00:13:09.011 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.011 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.011 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.011 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.011 fio-3.35 00:13:09.011 Starting 4 threads 00:13:10.385 00:13:10.385 job0: (groupid=0, jobs=1): err= 0: pid=70460: Fri Dec 6 13:06:58 2024 00:13:10.385 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:13:10.385 slat (usec): min=4, max=10304, avg=95.93, stdev=616.42 00:13:10.385 clat (usec): min=3785, max=21416, avg=12094.11, stdev=3034.34 00:13:10.385 lat (usec): min=3798, max=21445, avg=12190.04, stdev=3061.64 00:13:10.385 clat percentiles (usec): 00:13:10.385 | 1.00th=[ 5276], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9634], 00:13:10.385 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11863], 00:13:10.385 | 70.00th=[13173], 80.00th=[13960], 90.00th=[16712], 95.00th=[19006], 00:13:10.385 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:13:10.385 | 99.99th=[21365] 00:13:10.385 write: IOPS=5614, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:13:10.385 slat (usec): min=5, max=9017, avg=74.07, stdev=291.62 00:13:10.385 clat (usec): min=3638, max=21257, avg=10491.04, stdev=2173.90 00:13:10.385 lat (usec): min=3658, max=21265, avg=10565.12, stdev=2193.44 00:13:10.385 clat percentiles (usec): 00:13:10.385 | 1.00th=[ 4686], 5.00th=[ 5538], 10.00th=[ 6456], 20.00th=[ 9241], 00:13:10.385 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:13:10.385 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11863], 00:13:10.385 | 99.00th=[12125], 99.50th=[12256], 99.90th=[21103], 99.95th=[21103], 00:13:10.385 | 99.99th=[21365] 00:13:10.385 bw ( KiB/s): min=20752, max=24304, per=34.81%, avg=22528.00, stdev=2511.64, samples=2 00:13:10.385 iops : min= 5188, max= 6076, avg=5632.00, stdev=627.91, samples=2 00:13:10.385 lat (msec) : 4=0.12%, 10=23.26%, 20=75.35%, 50=1.27% 00:13:10.385 cpu : usr=4.69%, sys=12.76%, ctx=888, majf=0, minf=13 00:13:10.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:10.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.385 issued rwts: total=5632,5637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.385 job1: (groupid=0, jobs=1): err= 0: pid=70461: Fri Dec 6 13:06:58 2024 00:13:10.385 read: IOPS=2507, BW=9.79MiB/s (10.3MB/s)(10.0MiB/1021msec) 00:13:10.385 slat (usec): min=4, max=21425, avg=168.22, stdev=1174.23 00:13:10.385 clat (usec): min=10095, max=45374, avg=20954.12, stdev=7073.18 00:13:10.385 lat (usec): min=10108, max=45390, avg=21122.34, stdev=7155.30 00:13:10.385 clat percentiles (usec): 00:13:10.385 | 1.00th=[10159], 5.00th=[11207], 10.00th=[12125], 20.00th=[13698], 00:13:10.385 | 30.00th=[16188], 40.00th=[19268], 50.00th=[21365], 60.00th=[22676], 00:13:10.385 | 70.00th=[23462], 80.00th=[25560], 90.00th=[30802], 95.00th=[34341], 00:13:10.385 | 99.00th=[41157], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:13:10.385 | 99.99th=[45351] 00:13:10.385 write: IOPS=2834, BW=11.1MiB/s (11.6MB/s)(11.3MiB/1021msec); 0 zone resets 00:13:10.385 slat (usec): min=4, max=17553, avg=188.74, stdev=1023.16 00:13:10.385 clat (msec): min=8, max=117, avg=26.18, stdev=18.27 
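The four fio passes (write and randwrite at queue depth 1, then at queue depth 128) are all generated by scripts/fio-wrapper, which expands its -i/-d/-t/-r arguments into the [global] section printed before each run and points one libaio job at each of the four namespaces, /dev/nvme0n1 through /dev/nvme0n4. As an illustration only, the final randwrite/iodepth=128 job appears equivalent to this standalone invocation (options copied from the printed job file; a single device shown for brevity, and norandommap=0 omitted since it is fio's default):

# Sketch: standalone fio run matching the printed randwrite, iodepth=128 job file.
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread --invalidate=1 \
    --rw=randwrite --bs=4096 --iodepth=128 --numjobs=1 \
    --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_backlog=512 \
    --verify_dump=1 --verify_state_save=0

Because do_verify=1 is set, each pass writes CRC32C-protected blocks and reads them back, so even the "write" jobs exercise both directions of the TCP transport; the repeated "Could not set queue depth" warnings are harmless in these runs, as every job completes with err= 0.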
00:13:10.385 lat (msec): min=8, max=117, avg=26.37, stdev=18.38 00:13:10.385 clat percentiles (msec): 00:13:10.385 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:13:10.385 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:13:10.385 | 70.00th=[ 25], 80.00th=[ 28], 90.00th=[ 39], 95.00th=[ 60], 00:13:10.385 | 99.00th=[ 115], 99.50th=[ 116], 99.90th=[ 118], 99.95th=[ 118], 00:13:10.385 | 99.99th=[ 118] 00:13:10.385 bw ( KiB/s): min= 9136, max=13000, per=17.10%, avg=11068.00, stdev=2732.26, samples=2 00:13:10.385 iops : min= 2284, max= 3250, avg=2767.00, stdev=683.07, samples=2 00:13:10.385 lat (msec) : 10=2.93%, 20=31.22%, 50=62.78%, 100=1.91%, 250=1.16% 00:13:10.385 cpu : usr=1.57%, sys=8.43%, ctx=321, majf=0, minf=7 00:13:10.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:10.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.385 issued rwts: total=2560,2894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.385 job2: (groupid=0, jobs=1): err= 0: pid=70462: Fri Dec 6 13:06:58 2024 00:13:10.385 read: IOPS=2507, BW=9.79MiB/s (10.3MB/s)(10.0MiB/1021msec) 00:13:10.385 slat (usec): min=4, max=23317, avg=214.48, stdev=1448.55 00:13:10.385 clat (msec): min=6, max=128, avg=24.56, stdev=18.33 00:13:10.385 lat (msec): min=6, max=128, avg=24.78, stdev=18.48 00:13:10.385 clat percentiles (msec): 00:13:10.385 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:13:10.385 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 24], 00:13:10.385 | 70.00th=[ 25], 80.00th=[ 28], 90.00th=[ 44], 95.00th=[ 60], 00:13:10.385 | 99.00th=[ 116], 99.50th=[ 127], 99.90th=[ 129], 99.95th=[ 129], 00:13:10.385 | 99.99th=[ 129] 00:13:10.385 write: IOPS=2810, BW=11.0MiB/s (11.5MB/s)(11.2MiB/1021msec); 0 zone resets 00:13:10.385 slat (usec): min=5, max=26623, avg=149.37, stdev=965.50 00:13:10.385 clat (msec): min=4, max=128, avg=23.14, stdev=13.61 00:13:10.385 lat (msec): min=4, max=128, avg=23.29, stdev=13.68 00:13:10.386 clat percentiles (msec): 00:13:10.386 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:13:10.386 | 30.00th=[ 17], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 25], 00:13:10.386 | 70.00th=[ 25], 80.00th=[ 26], 90.00th=[ 30], 95.00th=[ 40], 00:13:10.386 | 99.00th=[ 100], 99.50th=[ 110], 99.90th=[ 116], 99.95th=[ 129], 00:13:10.386 | 99.99th=[ 129] 00:13:10.386 bw ( KiB/s): min= 9648, max=12288, per=16.95%, avg=10968.00, stdev=1866.76, samples=2 00:13:10.386 iops : min= 2412, max= 3072, avg=2742.00, stdev=466.69, samples=2 00:13:10.386 lat (msec) : 10=3.22%, 20=39.36%, 50=52.28%, 100=3.83%, 250=1.31% 00:13:10.386 cpu : usr=2.84%, sys=6.86%, ctx=305, majf=0, minf=15 00:13:10.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:10.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.386 issued rwts: total=2560,2870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.386 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.386 job3: (groupid=0, jobs=1): err= 0: pid=70463: Fri Dec 6 13:06:58 2024 00:13:10.386 read: IOPS=4880, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1005msec) 00:13:10.386 slat (usec): min=4, max=11534, avg=109.21, stdev=687.27 00:13:10.386 clat (usec): min=2412, max=24828, avg=13679.21, 
stdev=3487.65 00:13:10.386 lat (usec): min=5330, max=24844, avg=13788.43, stdev=3516.23 00:13:10.386 clat percentiles (usec): 00:13:10.386 | 1.00th=[ 5997], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10945], 00:13:10.386 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[13173], 00:13:10.386 | 70.00th=[14877], 80.00th=[16188], 90.00th=[19006], 95.00th=[21103], 00:13:10.386 | 99.00th=[23200], 99.50th=[23725], 99.90th=[24773], 99.95th=[24773], 00:13:10.386 | 99.99th=[24773] 00:13:10.386 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:13:10.386 slat (usec): min=5, max=9289, avg=83.24, stdev=324.51 00:13:10.386 clat (usec): min=4290, max=24773, avg=11754.48, stdev=2513.97 00:13:10.386 lat (usec): min=4330, max=24782, avg=11837.72, stdev=2533.88 00:13:10.386 clat percentiles (usec): 00:13:10.386 | 1.00th=[ 5145], 5.00th=[ 5932], 10.00th=[ 7046], 20.00th=[10159], 00:13:10.386 | 30.00th=[11863], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:13:10.386 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:13:10.386 | 99.00th=[14091], 99.50th=[14222], 99.90th=[23725], 99.95th=[23725], 00:13:10.386 | 99.99th=[24773] 00:13:10.386 bw ( KiB/s): min=20480, max=20480, per=31.64%, avg=20480.00, stdev= 0.00, samples=2 00:13:10.386 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:13:10.386 lat (msec) : 4=0.01%, 10=12.84%, 20=83.10%, 50=4.05% 00:13:10.386 cpu : usr=4.88%, sys=11.95%, ctx=766, majf=0, minf=16 00:13:10.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:10.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.386 issued rwts: total=4905,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.386 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.386 00:13:10.386 Run status group 0 (all jobs): 00:13:10.386 READ: bw=59.9MiB/s (62.8MB/s), 9.79MiB/s-21.9MiB/s (10.3MB/s-23.0MB/s), io=61.2MiB (64.1MB), run=1004-1021msec 00:13:10.386 WRITE: bw=63.2MiB/s (66.3MB/s), 11.0MiB/s-21.9MiB/s (11.5MB/s-23.0MB/s), io=64.5MiB (67.7MB), run=1004-1021msec 00:13:10.386 00:13:10.386 Disk stats (read/write): 00:13:10.386 nvme0n1: ios=4658/5079, merge=0/0, ticks=51880/51678, in_queue=103558, util=88.18% 00:13:10.386 nvme0n2: ios=2097/2559, merge=0/0, ticks=41253/63196, in_queue=104449, util=89.18% 00:13:10.386 nvme0n3: ios=2048/2527, merge=0/0, ticks=48040/56394, in_queue=104434, util=89.09% 00:13:10.386 nvme0n4: ios=4096/4527, merge=0/0, ticks=52259/51541, in_queue=103800, util=89.74% 00:13:10.386 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:10.386 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70476 00:13:10.386 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:10.386 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:10.386 [global] 00:13:10.386 thread=1 00:13:10.386 invalidate=1 00:13:10.386 rw=read 00:13:10.386 time_based=1 00:13:10.386 runtime=10 00:13:10.386 ioengine=libaio 00:13:10.386 direct=1 00:13:10.386 bs=4096 00:13:10.386 iodepth=1 00:13:10.386 norandommap=1 00:13:10.386 numjobs=1 00:13:10.386 00:13:10.386 [job0] 00:13:10.386 filename=/dev/nvme0n1 00:13:10.386 [job1] 00:13:10.386 filename=/dev/nvme0n2 00:13:10.386 [job2] 00:13:10.386 
filename=/dev/nvme0n3 00:13:10.386 [job3] 00:13:10.386 filename=/dev/nvme0n4 00:13:10.386 Could not set queue depth (nvme0n1) 00:13:10.386 Could not set queue depth (nvme0n2) 00:13:10.386 Could not set queue depth (nvme0n3) 00:13:10.386 Could not set queue depth (nvme0n4) 00:13:10.386 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.386 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.386 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.386 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.386 fio-3.35 00:13:10.386 Starting 4 threads 00:13:13.667 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:13.667 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=62803968, buflen=4096 00:13:13.667 fio: pid=70519, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:13.667 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:13.925 fio: pid=70518, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:13.925 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=53915648, buflen=4096 00:13:13.925 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:13.925 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:14.183 fio: pid=70516, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:14.183 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10022912, buflen=4096 00:13:14.183 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:14.183 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:14.440 fio: pid=70517, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:14.440 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=42254336, buflen=4096 00:13:14.440 00:13:14.440 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70516: Fri Dec 6 13:07:03 2024 00:13:14.441 read: IOPS=5269, BW=20.6MiB/s (21.6MB/s)(73.6MiB/3574msec) 00:13:14.441 slat (usec): min=11, max=13143, avg=17.72, stdev=154.76 00:13:14.441 clat (usec): min=3, max=2470, avg=170.56, stdev=47.14 00:13:14.441 lat (usec): min=152, max=13438, avg=188.27, stdev=163.73 00:13:14.441 clat percentiles (usec): 00:13:14.441 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:13:14.441 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:13:14.441 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:13:14.441 | 99.00th=[ 237], 99.50th=[ 251], 99.90th=[ 429], 99.95th=[ 635], 00:13:14.441 | 99.99th=[ 2474] 00:13:14.441 bw ( KiB/s): min=20366, max=21648, per=35.81%, avg=21163.67, stdev=454.85, samples=6 00:13:14.441 iops : min= 5091, max= 
5412, avg=5290.83, stdev=113.89, samples=6 00:13:14.441 lat (usec) : 4=0.01%, 10=0.01%, 250=99.45%, 500=0.46%, 750=0.02% 00:13:14.441 lat (usec) : 1000=0.01% 00:13:14.441 lat (msec) : 2=0.01%, 4=0.04% 00:13:14.441 cpu : usr=1.79%, sys=6.94%, ctx=18841, majf=0, minf=1 00:13:14.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:14.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.441 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.441 issued rwts: total=18832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:14.441 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70517: Fri Dec 6 13:07:03 2024 00:13:14.441 read: IOPS=2644, BW=10.3MiB/s (10.8MB/s)(40.3MiB/3901msec) 00:13:14.441 slat (usec): min=8, max=14601, avg=27.85, stdev=275.84 00:13:14.441 clat (usec): min=126, max=3844, avg=348.15, stdev=158.46 00:13:14.441 lat (usec): min=139, max=14955, avg=376.00, stdev=318.47 00:13:14.441 clat percentiles (usec): 00:13:14.441 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 157], 00:13:14.441 | 30.00th=[ 172], 40.00th=[ 379], 50.00th=[ 429], 60.00th=[ 441], 00:13:14.441 | 70.00th=[ 449], 80.00th=[ 461], 90.00th=[ 494], 95.00th=[ 519], 00:13:14.441 | 99.00th=[ 594], 99.50th=[ 644], 99.90th=[ 1221], 99.95th=[ 1762], 00:13:14.441 | 99.99th=[ 2704] 00:13:14.441 bw ( KiB/s): min= 7712, max=14646, per=15.91%, avg=9401.14, stdev=2365.91, samples=7 00:13:14.441 iops : min= 1928, max= 3661, avg=2350.14, stdev=591.30, samples=7 00:13:14.441 lat (usec) : 250=34.74%, 500=56.78%, 750=8.18%, 1000=0.10% 00:13:14.441 lat (msec) : 2=0.16%, 4=0.04% 00:13:14.441 cpu : usr=1.10%, sys=4.92%, ctx=10327, majf=0, minf=2 00:13:14.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:14.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.441 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.441 issued rwts: total=10317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:14.441 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70518: Fri Dec 6 13:07:03 2024 00:13:14.441 read: IOPS=3989, BW=15.6MiB/s (16.3MB/s)(51.4MiB/3300msec) 00:13:14.441 slat (usec): min=11, max=11765, avg=17.58, stdev=125.93 00:13:14.441 clat (usec): min=138, max=51510, avg=231.75, stdev=450.93 00:13:14.441 lat (usec): min=151, max=51523, avg=249.32, stdev=468.18 00:13:14.441 clat percentiles (usec): 00:13:14.441 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:13:14.441 | 30.00th=[ 188], 40.00th=[ 202], 50.00th=[ 235], 60.00th=[ 249], 00:13:14.441 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:13:14.441 | 99.00th=[ 338], 99.50th=[ 371], 99.90th=[ 652], 99.95th=[ 947], 00:13:14.441 | 99.99th=[ 2737] 00:13:14.441 bw ( KiB/s): min=15024, max=16528, per=26.92%, avg=15912.33, stdev=604.47, samples=6 00:13:14.441 iops : min= 3756, max= 4132, avg=3978.00, stdev=151.22, samples=6 00:13:14.441 lat (usec) : 250=61.79%, 500=37.97%, 750=0.16%, 1000=0.04% 00:13:14.441 lat (msec) : 2=0.02%, 4=0.02%, 100=0.01% 00:13:14.441 cpu : usr=1.03%, sys=5.30%, ctx=13166, majf=0, minf=2 00:13:14.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:14.441 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.441 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.441 issued rwts: total=13164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:14.441 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70519: Fri Dec 6 13:07:03 2024 00:13:14.441 read: IOPS=5169, BW=20.2MiB/s (21.2MB/s)(59.9MiB/2966msec) 00:13:14.441 slat (nsec): min=11462, max=77078, avg=15575.19, stdev=4042.83 00:13:14.441 clat (usec): min=121, max=6046, avg=176.35, stdev=59.33 00:13:14.441 lat (usec): min=163, max=6060, avg=191.93, stdev=59.90 00:13:14.441 clat percentiles (usec): 00:13:14.441 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 163], 00:13:14.441 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:13:14.441 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:13:14.441 | 99.00th=[ 260], 99.50th=[ 478], 99.90th=[ 537], 99.95th=[ 693], 00:13:14.441 | 99.99th=[ 1467] 00:13:14.441 bw ( KiB/s): min=20256, max=21720, per=35.45%, avg=20955.20, stdev=555.46, samples=5 00:13:14.441 iops : min= 5064, max= 5430, avg=5238.80, stdev=138.86, samples=5 00:13:14.441 lat (usec) : 250=98.98%, 500=0.76%, 750=0.21%, 1000=0.02% 00:13:14.441 lat (msec) : 2=0.02%, 10=0.01% 00:13:14.441 cpu : usr=1.25%, sys=6.81%, ctx=15335, majf=0, minf=1 00:13:14.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:14.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.441 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.441 issued rwts: total=15334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:14.441 00:13:14.441 Run status group 0 (all jobs): 00:13:14.441 READ: bw=57.7MiB/s (60.5MB/s), 10.3MiB/s-20.6MiB/s (10.8MB/s-21.6MB/s), io=225MiB (236MB), run=2966-3901msec 00:13:14.441 00:13:14.441 Disk stats (read/write): 00:13:14.441 nvme0n1: ios=17660/0, merge=0/0, ticks=3063/0, in_queue=3063, util=94.94% 00:13:14.441 nvme0n2: ios=10075/0, merge=0/0, ticks=3588/0, in_queue=3588, util=95.21% 00:13:14.441 nvme0n3: ios=12326/0, merge=0/0, ticks=2929/0, in_queue=2929, util=96.17% 00:13:14.441 nvme0n4: ios=14973/0, merge=0/0, ticks=2649/0, in_queue=2649, util=96.72% 00:13:14.441 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:14.441 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:15.007 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:15.007 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:15.265 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:15.265 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:15.524 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:13:15.524 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:15.782 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:15.782 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70476 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.350 nvmf hotplug test: fio failed as expected 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:16.350 13:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:16.608 rmmod nvme_tcp 00:13:16.608 rmmod nvme_fabrics 00:13:16.608 rmmod nvme_keyring 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69980 ']' 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69980 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 69980 ']' 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 69980 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69980 00:13:16.608 killing process with pid 69980 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69980' 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 69980 00:13:16.608 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 69980 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br 
down 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:16.868 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:13:17.125 00:13:17.125 real 0m21.087s 00:13:17.125 user 1m21.155s 00:13:17.125 sys 0m8.781s 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.125 ************************************ 00:13:17.125 END TEST nvmf_fio_target 00:13:17.125 ************************************ 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:17.125 ************************************ 00:13:17.125 START TEST nvmf_bdevio 00:13:17.125 ************************************ 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:17.125 * Looking for test storage... 
00:13:17.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:13:17.125 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.384 --rc genhtml_branch_coverage=1 00:13:17.384 --rc genhtml_function_coverage=1 00:13:17.384 --rc genhtml_legend=1 00:13:17.384 --rc geninfo_all_blocks=1 00:13:17.384 --rc geninfo_unexecuted_blocks=1 00:13:17.384 00:13:17.384 ' 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.384 --rc genhtml_branch_coverage=1 00:13:17.384 --rc genhtml_function_coverage=1 00:13:17.384 --rc genhtml_legend=1 00:13:17.384 --rc geninfo_all_blocks=1 00:13:17.384 --rc geninfo_unexecuted_blocks=1 00:13:17.384 00:13:17.384 ' 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.384 --rc genhtml_branch_coverage=1 00:13:17.384 --rc genhtml_function_coverage=1 00:13:17.384 --rc genhtml_legend=1 00:13:17.384 --rc geninfo_all_blocks=1 00:13:17.384 --rc geninfo_unexecuted_blocks=1 00:13:17.384 00:13:17.384 ' 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.384 --rc genhtml_branch_coverage=1 00:13:17.384 --rc genhtml_function_coverage=1 00:13:17.384 --rc genhtml_legend=1 00:13:17.384 --rc geninfo_all_blocks=1 00:13:17.384 --rc geninfo_unexecuted_blocks=1 00:13:17.384 00:13:17.384 ' 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:17.384 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:17.385 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
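For orientation at this point in the log: bdevio.sh has sourced test/nvmf/common.sh, and the parameters the rest of the run depends on are now in the environment. The one complaint above, "common.sh: line 33: [: : integer expression expected", comes from the traced check '[' '' -eq 1 ']' comparing an empty variable numerically; bash prints the warning and the script simply continues. A condensed sketch of the resulting state, with values copied from the trace (this standalone form is illustrative, not the common.sh source itself):

    NVMF_PORT=4420                      # NVMe/TCP listener port used for cnode1 later
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # random host NQN; this run drew
                                        # nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # UUID suffix of that NQN, here 02865f8e-a4d1-44f6-b22e-478c939aabc8
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    MALLOC_BDEV_SIZE=64                 # MiB: size of the RAM bdev the target will export
    MALLOC_BLOCK_SIZE=512               # bytes per logical block

nvmftestinit, traced next, tears down any stale SPDK test network and rebuilds it from scratch.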
00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:17.385 Cannot find device "nvmf_init_br" 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:17.385 Cannot find device "nvmf_init_br2" 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:17.385 Cannot find device "nvmf_tgt_br" 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:13:17.385 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:17.385 Cannot find device "nvmf_tgt_br2" 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:17.386 Cannot find device "nvmf_init_br" 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:17.386 Cannot find device "nvmf_init_br2" 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:17.386 Cannot find device "nvmf_tgt_br" 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:17.386 Cannot find device "nvmf_tgt_br2" 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:17.386 Cannot find device "nvmf_br" 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:17.386 Cannot find device "nvmf_init_if" 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:13:17.386 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:17.386 Cannot find device "nvmf_init_if2" 00:13:17.386 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:13:17.386 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:17.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.386 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:13:17.386 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:17.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.386 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:13:17.386 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:17.386 
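The "Cannot find device" and "Cannot open network namespace" lines above are expected: nvmf_veth_fini probes for leftovers from a previous run before nvmf_veth_init rebuilds the topology, and each probe is paired with a "true" so a miss cannot fail the script. The shape built next, condensed from the commands traced on the following lines (only one veth pair per side shown; the second pair with 10.0.0.2 and 10.0.0.4 is created the same way, and this sketch omits the "link set ... up" steps):

    ip netns add nvmf_tgt_ns_spdk                     # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk    # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if          # initiator-side address
    ip netns exec nvmf_tgt_ns_spdk \
        ip addr add 10.0.0.3/24 dev nvmf_tgt_if       # target address (the -a 10.0.0.3 used later)
    ip link add nvmf_br type bridge                   # bridge stitches the veth pairs together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

Once the links are up, the script inserts iptables ACCEPT rules for port 4420 tagged with an SPDK_NVMF comment (which is why the earlier teardown could strip exactly these rules via iptables-save | grep -v SPDK_NVMF), then pings all four addresses to prove connectivity in both directions across the bridge.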
13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:17.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:17.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:13:17.644 00:13:17.644 --- 10.0.0.3 ping statistics --- 00:13:17.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.644 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:17.644 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:17.644 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:13:17.644 00:13:17.644 --- 10.0.0.4 ping statistics --- 00:13:17.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.644 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:17.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:17.644 00:13:17.644 --- 10.0.0.1 ping statistics --- 00:13:17.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.644 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:17.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:17.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:13:17.644 00:13:17.644 --- 10.0.0.2 ping statistics --- 00:13:17.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.644 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:17.644 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70907 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70907 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70907 ']' 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.645 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:17.903 [2024-12-06 13:07:06.321600] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
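The nvmf_tgt launch above decodes as follows: the app runs inside the nvmf_tgt_ns_spdk namespace (NVMF_APP was prefixed with the netns exec command), -i 0 selects shared-memory id 0, -e 0xFFFF enables every tracepoint group, and -m 0x78 is the reactor core mask: 0x78 = 0b01111000, i.e. cores 3, 4, 5 and 6, which matches the four "Reactor started" lines in the startup banner that follows. Once waitforlisten sees /var/tmp/spdk.sock, the suite configures the target over that socket; rpc_cmd in the trace is the test wrapper around scripts/rpc.py, so the traced sequence is equivalent to this sketch:

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport (flags exactly as traced)
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # $MALLOC_BDEV_SIZE MiB, $MALLOC_BLOCK_SIZE B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420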
00:13:17.903 [2024-12-06 13:07:06.321732] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.903 [2024-12-06 13:07:06.470796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.160 [2024-12-06 13:07:06.543732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.160 [2024-12-06 13:07:06.543800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.161 [2024-12-06 13:07:06.543815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.161 [2024-12-06 13:07:06.543825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.161 [2024-12-06 13:07:06.543834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.161 [2024-12-06 13:07:06.545136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:18.161 [2024-12-06 13:07:06.545285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:18.161 [2024-12-06 13:07:06.545399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:18.161 [2024-12-06 13:07:06.545408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.161 [2024-12-06 13:07:06.727618] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.161 Malloc0 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.161 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.161 [2024-12-06 13:07:06.795445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:18.419 { 00:13:18.419 "params": { 00:13:18.419 "name": "Nvme$subsystem", 00:13:18.419 "trtype": "$TEST_TRANSPORT", 00:13:18.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.419 "adrfam": "ipv4", 00:13:18.419 "trsvcid": "$NVMF_PORT", 00:13:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.419 "hdgst": ${hdgst:-false}, 00:13:18.419 "ddgst": ${ddgst:-false} 00:13:18.419 }, 00:13:18.419 "method": "bdev_nvme_attach_controller" 00:13:18.419 } 00:13:18.419 EOF 00:13:18.419 )") 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:18.419 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:18.419 "params": { 00:13:18.419 "name": "Nvme1", 00:13:18.419 "trtype": "tcp", 00:13:18.419 "traddr": "10.0.0.3", 00:13:18.419 "adrfam": "ipv4", 00:13:18.419 "trsvcid": "4420", 00:13:18.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.419 "hdgst": false, 00:13:18.419 "ddgst": false 00:13:18.419 }, 00:13:18.419 "method": "bdev_nvme_attach_controller" 00:13:18.419 }' 00:13:18.419 [2024-12-06 13:07:06.860161] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:13:18.419 [2024-12-06 13:07:06.860262] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70953 ] 00:13:18.419 [2024-12-06 13:07:07.015730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:18.676 [2024-12-06 13:07:07.087993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.676 [2024-12-06 13:07:07.088139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.676 [2024-12-06 13:07:07.088146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.676 I/O targets: 00:13:18.676 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:18.676 00:13:18.676 00:13:18.676 CUnit - A unit testing framework for C - Version 2.1-3 00:13:18.676 http://cunit.sourceforge.net/ 00:13:18.676 00:13:18.676 00:13:18.676 Suite: bdevio tests on: Nvme1n1 00:13:18.934 Test: blockdev write read block ...passed 00:13:18.934 Test: blockdev write zeroes read block ...passed 00:13:18.934 Test: blockdev write zeroes read no split ...passed 00:13:18.934 Test: blockdev write zeroes read split ...passed 00:13:18.934 Test: blockdev write zeroes read split partial ...passed 00:13:18.934 Test: blockdev reset ...[2024-12-06 13:07:07.387580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:18.934 [2024-12-06 13:07:07.387699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ef7b0 (9): Bad file descriptor 00:13:18.934 passed 00:13:18.934 Test: blockdev write read 8 blocks ...[2024-12-06 13:07:07.401994] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:18.934 passed 00:13:18.934 Test: blockdev write read size > 128k ...passed 00:13:18.934 Test: blockdev write read invalid size ...passed 00:13:18.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.934 Test: blockdev write read max offset ...passed 00:13:18.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.934 Test: blockdev writev readv 8 blocks ...passed 00:13:18.934 Test: blockdev writev readv 30 x 1block ...passed 00:13:19.193 Test: blockdev writev readv block ...passed 00:13:19.193 Test: blockdev writev readv size > 128k ...passed 00:13:19.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:19.193 Test: blockdev comparev and writev ...[2024-12-06 13:07:07.577468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.193 [2024-12-06 13:07:07.577515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:19.193 [2024-12-06 13:07:07.577536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.193 [2024-12-06 13:07:07.577547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:19.193 [2024-12-06 13:07:07.578104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.193 [2024-12-06 13:07:07.578137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:19.193 [2024-12-06 13:07:07.578156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.193 [2024-12-06 13:07:07.578166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:19.193 [2024-12-06 13:07:07.578628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.193 [2024-12-06 13:07:07.578657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:19.193 [2024-12-06 13:07:07.578676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.193 [2024-12-06 13:07:07.578687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:19.193 [2024-12-06 13:07:07.579070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.193 [2024-12-06 13:07:07.579098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:19.193 [2024-12-06 13:07:07.579117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.193 [2024-12-06 13:07:07.579128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:13:19.193 passed 00:13:19.193 Test: blockdev nvme passthru rw ...passed 00:13:19.193 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:07:07.662827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.193 [2024-12-06 13:07:07.662862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:19.193 [2024-12-06 13:07:07.663168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.193 [2024-12-06 13:07:07.663200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:19.193 [2024-12-06 13:07:07.663409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.193 [2024-12-06 13:07:07.663438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:19.193 [2024-12-06 13:07:07.663687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.193 [2024-12-06 13:07:07.663784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:19.193 passed 00:13:19.193 Test: blockdev nvme admin passthru ...passed 00:13:19.193 Test: blockdev copy ...passed 00:13:19.193 00:13:19.193 Run Summary: Type Total Ran Passed Failed Inactive 00:13:19.193 suites 1 1 n/a 0 0 00:13:19.193 tests 23 23 23 0 0 00:13:19.193 asserts 152 152 152 0 n/a 00:13:19.193 00:13:19.193 Elapsed time = 0.890 seconds 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.452 rmmod nvme_tcp 00:13:19.452 rmmod nvme_fabrics 00:13:19.452 rmmod nvme_keyring 00:13:19.452 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 70907 ']' 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70907 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70907 ']' 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70907 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70907 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:19.452 killing process with pid 70907 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70907' 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70907 00:13:19.452 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70907 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:19.711 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:19.968 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:19.968 13:07:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:19.968 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:13:19.969 00:13:19.969 real 0m2.827s 00:13:19.969 user 0m8.960s 00:13:19.969 sys 0m0.841s 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.969 ************************************ 00:13:19.969 END TEST nvmf_bdevio 00:13:19.969 ************************************ 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:19.969 00:13:19.969 real 3m36.900s 00:13:19.969 user 11m23.806s 00:13:19.969 sys 1m2.022s 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:19.969 ************************************ 00:13:19.969 END TEST nvmf_target_core 00:13:19.969 ************************************ 00:13:19.969 13:07:08 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:19.969 13:07:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:19.969 13:07:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.969 13:07:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.969 ************************************ 00:13:19.969 START TEST nvmf_target_extra 00:13:19.969 ************************************ 00:13:19.969 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:20.239 * Looking for test storage... 
00:13:20.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:20.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.239 --rc genhtml_branch_coverage=1 00:13:20.239 --rc genhtml_function_coverage=1 00:13:20.239 --rc genhtml_legend=1 00:13:20.239 --rc geninfo_all_blocks=1 00:13:20.239 --rc geninfo_unexecuted_blocks=1 00:13:20.239 00:13:20.239 ' 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:20.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.239 --rc genhtml_branch_coverage=1 00:13:20.239 --rc genhtml_function_coverage=1 00:13:20.239 --rc genhtml_legend=1 00:13:20.239 --rc geninfo_all_blocks=1 00:13:20.239 --rc geninfo_unexecuted_blocks=1 00:13:20.239 00:13:20.239 ' 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:20.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.239 --rc genhtml_branch_coverage=1 00:13:20.239 --rc genhtml_function_coverage=1 00:13:20.239 --rc genhtml_legend=1 00:13:20.239 --rc geninfo_all_blocks=1 00:13:20.239 --rc geninfo_unexecuted_blocks=1 00:13:20.239 00:13:20.239 ' 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:20.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.239 --rc genhtml_branch_coverage=1 00:13:20.239 --rc genhtml_function_coverage=1 00:13:20.239 --rc genhtml_legend=1 00:13:20.239 --rc geninfo_all_blocks=1 00:13:20.239 --rc geninfo_unexecuted_blocks=1 00:13:20.239 00:13:20.239 ' 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.239 13:07:08 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.239 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.240 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.240 ************************************ 00:13:20.240 START TEST nvmf_example 00:13:20.240 ************************************ 00:13:20.240 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:20.240 * Looking for test storage... 
00:13:20.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:20.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.502 --rc genhtml_branch_coverage=1 00:13:20.502 --rc genhtml_function_coverage=1 00:13:20.502 --rc genhtml_legend=1 00:13:20.502 --rc geninfo_all_blocks=1 00:13:20.502 --rc geninfo_unexecuted_blocks=1 00:13:20.502 00:13:20.502 ' 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:20.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.502 --rc genhtml_branch_coverage=1 00:13:20.502 --rc genhtml_function_coverage=1 00:13:20.502 --rc genhtml_legend=1 00:13:20.502 --rc geninfo_all_blocks=1 00:13:20.502 --rc geninfo_unexecuted_blocks=1 00:13:20.502 00:13:20.502 ' 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:20.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.502 --rc genhtml_branch_coverage=1 00:13:20.502 --rc genhtml_function_coverage=1 00:13:20.502 --rc genhtml_legend=1 00:13:20.502 --rc geninfo_all_blocks=1 00:13:20.502 --rc geninfo_unexecuted_blocks=1 00:13:20.502 00:13:20.502 ' 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:20.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.502 --rc genhtml_branch_coverage=1 00:13:20.502 --rc genhtml_function_coverage=1 00:13:20.502 --rc genhtml_legend=1 00:13:20.502 --rc geninfo_all_blocks=1 00:13:20.502 --rc geninfo_unexecuted_blocks=1 00:13:20.502 00:13:20.502 ' 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:20.502 13:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.502 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.502 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.503 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:20.503 13:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:20.503 Cannot find device "nvmf_init_br" 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:20.503 Cannot find device "nvmf_init_br2" 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:20.503 Cannot find device "nvmf_tgt_br" 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:20.503 Cannot find device "nvmf_tgt_br2" 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:20.503 Cannot find device "nvmf_init_br" 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:20.503 Cannot find device "nvmf_init_br2" 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:20.503 Cannot find device "nvmf_tgt_br" 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:20.503 Cannot find device "nvmf_tgt_br2" 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:20.503 Cannot find device "nvmf_br" 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:20.503 Cannot find 
device "nvmf_init_if" 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:13:20.503 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:20.771 Cannot find device "nvmf_init_if2" 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:20.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:20.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:20.771 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:20.771 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:13:20.771 00:13:20.771 --- 10.0.0.3 ping statistics --- 00:13:20.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.771 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:20.771 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:20.771 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:13:20.771 00:13:20.771 --- 10.0.0.4 ping statistics --- 00:13:20.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.771 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:20.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:13:20.771 00:13:20.771 --- 10.0.0.1 ping statistics --- 00:13:20.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.771 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:20.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:13:20.771 00:13:20.771 --- 10.0.0.2 ping statistics --- 00:13:20.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.771 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:20.771 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.772 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:20.772 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71248 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71248 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 71248 ']' 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
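Note: the nvmf/common.sh trace above builds a disposable test network before the target starts. Each endpoint is one end of a veth pair: nvmf_init_if/nvmf_init_if2 stay in the root namespace as the initiator side (10.0.0.1, 10.0.0.2), nvmf_tgt_if/nvmf_tgt_if2 move into the nvmf_tgt_ns_spdk namespace as the target side (10.0.0.3, 10.0.0.4), the leftover peer ends are enslaved to the nvmf_br bridge so both sides share one L2 segment, and iptables is opened for the NVMe/TCP port 4420. A minimal hand-written equivalent for a single initiator/target pair (the second pair is omitted; names and addresses as in the log):

  # one namespace for the target, one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end...
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # ...moves into the target namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bridge the peer ends together so initiator and target share an L2 segment
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings above verified both directions across this topology before anything NVMe-related runs.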
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71248
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71248
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 71248 ']'
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:21.029 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:21.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:21.030 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:21.030 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:22.401 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
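Note: rpc_cmd in this trace is the suite's thin wrapper that sends JSON-RPC to the target over the /var/tmp/spdk.sock socket waited on above. Outside the harness, the same five provisioning steps can be issued by hand with SPDK's scripts/rpc.py; flags are copied verbatim from the trace, and the comments are this edit's gloss rather than part of the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -o and -u are tuning flags as traced
  $rpc bdev_malloc_create 64 512                  # 64 MB RAM-backed bdev, 512-byte blocks -> "Malloc0"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as NSID 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Once the listener exists on 10.0.0.3:4420, any NVMe/TCP initiator on the bridge can connect to cnode1, which is exactly what the perf run below does.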
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:13:22.402 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:13:32.393 Initializing NVMe Controllers
00:13:32.393 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:13:32.393 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:32.393 Initialization complete. Launching workers.
00:13:32.393 ========================================================
00:13:32.393                                                                           Latency(us)
00:13:32.393 Device Information                                                 :       IOPS      MiB/s    Average        min        max
00:13:32.393 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   14795.87      57.80    4327.19     780.44   20265.68
00:13:32.393 ========================================================
00:13:32.393 Total                                                              :   14795.87      57.80    4327.19     780.44   20265.68
00:13:32.393
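Note: the results table is internally consistent, which is a quick way to vet a perf run: 14795.87 IOPS x 4096-byte I/Os = 57.8 MiB/s, matching the MiB/s column, and with queue depth 64 Little's law predicts an average latency of 64 / 14795.87 s = 4326 us, matching the 4327.19 us Average column to within ramp-up and rounding. A sketch of the same check (a hypothetical helper, not part of the suite; numbers taken from the table above):

  awk 'BEGIN { iops = 14795.87; qd = 64; iosz = 4096
               printf "throughput  = %.2f MiB/s\n", iops * iosz / 2^20
               printf "avg latency = %.0f us\n", qd / iops * 1e6 }'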
00:13:32.393 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:13:32.393 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:13:32.393 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:32.393 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:32.650 rmmod nvme_tcp
00:13:32.650 rmmod nvme_fabrics
00:13:32.650 rmmod nvme_keyring
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 71248 ']'
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 71248
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 71248 ']'
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 71248
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71248
00:13:32.650 killing process with pid 71248
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71248'
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 71248
00:13:32.650 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 71248
00:13:32.907 nvmf threads initialize successfully
00:13:32.907 bdev subsystem init successfully
00:13:32.907 created a nvmf target service
00:13:32.907 create targets's poll groups done
00:13:32.907 all subsystems of target started
00:13:32.907 nvmf target is running
00:13:32.907 all subsystems of target stopped
00:13:32.907 destroy targets's poll groups done
00:13:32.907 destroyed the nvmf target service
00:13:32.907 bdev subsystem finish successfully
00:13:32.907 nvmf threads destroy successfully
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:13:32.907 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:33.165
00:13:33.165 real 0m12.831s
00:13:33.165 user 0m45.131s
00:13:33.165 sys 0m2.173s
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:33.165 ************************************
00:13:33.165 END TEST nvmf_example
00:13:33.165 ************************************
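Note on the teardown just traced: this is why every iptables rule was installed at setup with an '-m comment --comment SPDK_NVMF:...' tag. Rather than tracking rule positions, the iptr helper deletes them wholesale by round-tripping the ruleset through a filter, and nvmf_veth_fini unwinds the links in reverse order of creation. The same idea in isolation (the final namespace removal happens inside _remove_spdk_ns, whose xtrace output is silenced above, so that last line is an assumption about what it does):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the suite's tagged rules
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if                            # deleting one end removes the whole veth pair
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk                       # assumed equivalent of _remove_spdk_ns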
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:33.165 ************************************
00:13:33.165 START TEST nvmf_filesystem
00:13:33.165 ************************************
00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:13:33.165 * Looking for test storage...
00:13:33.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:13:33.165 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:33.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.428 --rc genhtml_branch_coverage=1 00:13:33.428 --rc genhtml_function_coverage=1 00:13:33.428 --rc genhtml_legend=1 00:13:33.428 --rc geninfo_all_blocks=1 00:13:33.428 --rc geninfo_unexecuted_blocks=1 00:13:33.428 00:13:33.428 ' 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:33.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.428 --rc genhtml_branch_coverage=1 00:13:33.428 --rc genhtml_function_coverage=1 00:13:33.428 --rc genhtml_legend=1 00:13:33.428 --rc geninfo_all_blocks=1 00:13:33.428 --rc geninfo_unexecuted_blocks=1 00:13:33.428 00:13:33.428 ' 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:33.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.428 --rc genhtml_branch_coverage=1 00:13:33.428 --rc genhtml_function_coverage=1 00:13:33.428 --rc genhtml_legend=1 00:13:33.428 --rc geninfo_all_blocks=1 00:13:33.428 --rc geninfo_unexecuted_blocks=1 00:13:33.428 00:13:33.428 ' 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:33.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.428 --rc genhtml_branch_coverage=1 00:13:33.428 --rc genhtml_function_coverage=1 00:13:33.428 --rc genhtml_legend=1 00:13:33.428 --rc geninfo_all_blocks=1 00:13:33.428 --rc geninfo_unexecuted_blocks=1 00:13:33.428 00:13:33.428 ' 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:33.428 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:33.429 13:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:33.429 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:33.430 #define SPDK_CONFIG_H 00:13:33.430 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:33.430 #define SPDK_CONFIG_APPS 1 00:13:33.430 #define SPDK_CONFIG_ARCH 
native 00:13:33.430 #undef SPDK_CONFIG_ASAN 00:13:33.430 #define SPDK_CONFIG_AVAHI 1 00:13:33.430 #undef SPDK_CONFIG_CET 00:13:33.430 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:33.430 #define SPDK_CONFIG_COVERAGE 1 00:13:33.430 #define SPDK_CONFIG_CROSS_PREFIX 00:13:33.430 #undef SPDK_CONFIG_CRYPTO 00:13:33.430 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:33.430 #undef SPDK_CONFIG_CUSTOMOCF 00:13:33.430 #undef SPDK_CONFIG_DAOS 00:13:33.430 #define SPDK_CONFIG_DAOS_DIR 00:13:33.430 #define SPDK_CONFIG_DEBUG 1 00:13:33.430 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:33.430 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:33.430 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:33.430 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:33.430 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:33.430 #undef SPDK_CONFIG_DPDK_UADK 00:13:33.430 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:33.430 #define SPDK_CONFIG_EXAMPLES 1 00:13:33.430 #undef SPDK_CONFIG_FC 00:13:33.430 #define SPDK_CONFIG_FC_PATH 00:13:33.430 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:33.430 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:33.430 #define SPDK_CONFIG_FSDEV 1 00:13:33.430 #undef SPDK_CONFIG_FUSE 00:13:33.430 #undef SPDK_CONFIG_FUZZER 00:13:33.430 #define SPDK_CONFIG_FUZZER_LIB 00:13:33.430 #define SPDK_CONFIG_GOLANG 1 00:13:33.430 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:33.430 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:33.430 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:33.430 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:33.430 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:33.430 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:33.430 #undef SPDK_CONFIG_HAVE_LZ4 00:13:33.430 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:33.430 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:33.430 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:33.430 #define SPDK_CONFIG_IDXD 1 00:13:33.430 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:33.430 #undef SPDK_CONFIG_IPSEC_MB 00:13:33.430 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:33.430 #define SPDK_CONFIG_ISAL 1 00:13:33.430 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:33.430 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:33.430 #define SPDK_CONFIG_LIBDIR 00:13:33.430 #undef SPDK_CONFIG_LTO 00:13:33.430 #define SPDK_CONFIG_MAX_LCORES 128 00:13:33.430 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:33.430 #define SPDK_CONFIG_NVME_CUSE 1 00:13:33.430 #undef SPDK_CONFIG_OCF 00:13:33.430 #define SPDK_CONFIG_OCF_PATH 00:13:33.430 #define SPDK_CONFIG_OPENSSL_PATH 00:13:33.430 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:33.430 #define SPDK_CONFIG_PGO_DIR 00:13:33.430 #undef SPDK_CONFIG_PGO_USE 00:13:33.430 #define SPDK_CONFIG_PREFIX /usr/local 00:13:33.430 #undef SPDK_CONFIG_RAID5F 00:13:33.430 #undef SPDK_CONFIG_RBD 00:13:33.430 #define SPDK_CONFIG_RDMA 1 00:13:33.430 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:33.430 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:33.430 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:33.430 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:33.430 #define SPDK_CONFIG_SHARED 1 00:13:33.430 #undef SPDK_CONFIG_SMA 00:13:33.430 #define SPDK_CONFIG_TESTS 1 00:13:33.430 #undef SPDK_CONFIG_TSAN 00:13:33.430 #define SPDK_CONFIG_UBLK 1 00:13:33.430 #define SPDK_CONFIG_UBSAN 1 00:13:33.430 #undef SPDK_CONFIG_UNIT_TESTS 00:13:33.430 #undef SPDK_CONFIG_URING 00:13:33.430 #define SPDK_CONFIG_URING_PATH 00:13:33.430 #undef SPDK_CONFIG_URING_ZNS 00:13:33.430 #define SPDK_CONFIG_USDT 1 00:13:33.430 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:33.430 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:33.430 
#undef SPDK_CONFIG_VFIO_USER 00:13:33.430 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:33.430 #define SPDK_CONFIG_VHOST 1 00:13:33.430 #define SPDK_CONFIG_VIRTIO 1 00:13:33.430 #undef SPDK_CONFIG_VTUNE 00:13:33.430 #define SPDK_CONFIG_VTUNE_DIR 00:13:33.430 #define SPDK_CONFIG_WERROR 1 00:13:33.430 #define SPDK_CONFIG_WPDK_DIR 00:13:33.430 #undef SPDK_CONFIG_XNVME 00:13:33.430 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:33.430 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:33.431 
13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:33.431 13:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:33.431 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:33.432 13:07:21 
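The records above assemble the sanitizer environment: ASAN and UBSAN options are exported, and a leak-suppression file for libfuse3 is written and handed to LSAN. Reassembled as a plain script, with every value copied from the log (a sketch of the visible commands, not the script's verbatim source):

  sup=/var/tmp/asan_suppression_file
  rm -rf "$sup"
  echo 'leak:libfuse3.so' > "$sup"     # suppress known libfuse3 leak reports
  export LSAN_OPTIONS=suppressions=$sup
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134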
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 71522 ]] 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 71522 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:33.432 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:33.433 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:33.433 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:33.433 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:33.433 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.uZCG5b 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.uZCG5b/tests/target /tmp/spdk.uZCG5b 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13978963968 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5590003712 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256394240 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13978963968 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5590003712 00:13:33.433 
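The repeating `read -r source fs size use avail _ mount` records above walk the output of `df -T`, filling one associative array per column, keyed by mount point. A condensed sketch of that loop, with array and field names as they appear in the log (the log shows byte-sized values, so the real invocation presumably adjusts the block size; that detail is not visible here):

  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source     # device, e.g. /dev/vda5
    fss["$mount"]=$fs            # filesystem type, e.g. btrfs
    sizes["$mount"]=$size
    uses["$mount"]=$use
    avails["$mount"]=$avail
  done < <(df -T | grep -v Filesystem)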
13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266290176 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt/output 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=93278920704 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6423859200 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:33.433 * Looking for test storage... 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13978963968 00:13:33.433 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:33.434 13:07:22 
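Building on those arrays, the storage probe above resolves the test directory to its mount with `df`/`awk`, accepts the first candidate whose free space covers `requested_size` (2214592512 bytes in this run), and exports it as SPDK_TEST_STORAGE. A sketch of that selection under the same names:

  requested_size=2214592512                    # 2 GiB + 64 MiB, as in the log
  for target_dir in "${storage_candidates[@]}"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails["$mount"]:-0}
    if (( target_space >= requested_size )); then
      export SPDK_TEST_STORAGE=$target_dir
      printf '* Found test storage at %s\n' "$target_dir"
      break
    fi
  done

Here the btrfs mount at /home had about 13 GB available against a 2.2 GB request, so the first candidate was accepted.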
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:33.434 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:33.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.695 --rc genhtml_branch_coverage=1 00:13:33.695 --rc genhtml_function_coverage=1 00:13:33.695 --rc genhtml_legend=1 00:13:33.695 --rc geninfo_all_blocks=1 00:13:33.695 --rc geninfo_unexecuted_blocks=1 00:13:33.695 00:13:33.695 ' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:33.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.695 --rc genhtml_branch_coverage=1 00:13:33.695 --rc genhtml_function_coverage=1 00:13:33.695 --rc genhtml_legend=1 00:13:33.695 --rc geninfo_all_blocks=1 00:13:33.695 --rc geninfo_unexecuted_blocks=1 00:13:33.695 00:13:33.695 ' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:33.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.695 --rc genhtml_branch_coverage=1 00:13:33.695 --rc genhtml_function_coverage=1 00:13:33.695 --rc genhtml_legend=1 00:13:33.695 --rc geninfo_all_blocks=1 00:13:33.695 --rc geninfo_unexecuted_blocks=1 00:13:33.695 00:13:33.695 ' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:33.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.695 --rc genhtml_branch_coverage=1 00:13:33.695 --rc genhtml_function_coverage=1 00:13:33.695 --rc genhtml_legend=1 00:13:33.695 --rc geninfo_all_blocks=1 00:13:33.695 --rc geninfo_unexecuted_blocks=1 00:13:33.695 00:13:33.695 ' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:33.695 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:33.695 13:07:22 
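One record above is an actual scripting error rather than test output: `'[' '' -eq 1 ']'` triggers `/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected`, because an unset variable reaches `-eq` as an empty string. The run continues (the failed `[ ... ]` just takes the else path), but a defensive form would default the value before the integer test. Illustrative only; the real variable's name is not visible in the log:

  some_flag=''                           # hypothetical stand-in for the unset variable
  if [ "${some_flag:-0}" -eq 1 ]; then   # ':-0' keeps '' away from the integer test
    echo 'flag enabled'
  fi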
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:33.695 Cannot find device "nvmf_init_br" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:33.695 Cannot find device "nvmf_init_br2" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:33.695 Cannot find device "nvmf_tgt_br" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:33.695 Cannot find device "nvmf_tgt_br2" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:33.695 Cannot find device "nvmf_init_br" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:33.695 Cannot find device "nvmf_init_br2" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:33.695 Cannot find device "nvmf_tgt_br" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:33.695 Cannot find device "nvmf_tgt_br2" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:33.695 Cannot find device "nvmf_br" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:33.695 Cannot find device "nvmf_init_if" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:33.695 Cannot find device "nvmf_init_if2" 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:33.695 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:33.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:33.695 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:13:33.953 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:13:33.953 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms
00:13:33.953
00:13:33.953 --- 10.0.0.3 ping statistics ---
00:13:33.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:33.953 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:13:33.953 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:13:33.953 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms
00:13:33.953
00:13:33.953 --- 10.0.0.4 ping statistics ---
00:13:33.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:33.953 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:13:33.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:33.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms
00:13:33.953
00:13:33.953 --- 10.0.0.1 ping statistics ---
00:13:33.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:33.953 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:13:33.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:33.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:13:33.953
00:13:33.953 --- 10.0.0.2 ping statistics ---
00:13:33.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:33.953 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:13:33.953 ************************************
00:13:33.953 START TEST nvmf_filesystem_no_in_capsule
00:13:33.953 ************************************
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71712
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71712
00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71712 ']'
00:13:33.953 13:07:22
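Taken together, the `ip` and `iptables` records above build the dual-namespace test topology that the four pings just verified: veth pairs on the initiator side, veth pairs on the target side moved into `nvmf_tgt_ns_spdk`, all bridged over `nvmf_br`, with TCP port 4420 opened. The earlier `Cannot find device` messages are the tolerant pre-cleanup of those same links, so they are expected on a fresh host. A condensed sketch using the names and addresses from the log (one pair per side shown; the log creates two of each, and root is required):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                  # host to target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace to host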
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.953 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.954 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.954 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.954 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.210 [2024-12-06 13:07:22.638327] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:13:34.210 [2024-12-06 13:07:22.638652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.210 [2024-12-06 13:07:22.793995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.468 [2024-12-06 13:07:22.860597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.469 [2024-12-06 13:07:22.860660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.469 [2024-12-06 13:07:22.860675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.469 [2024-12-06 13:07:22.860686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.469 [2024-12-06 13:07:22.860695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
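Before launching the target, the harness has attached each veth endpoint to the nvmf_br bridge, opened TCP port 4420 through iptables, and ping-verified connectivity in both directions; only then is nvmf_tgt started inside the nvmf_tgt_ns_spdk namespace. A minimal manual equivalent, assuming the bridge, veth pairs, and namespace already exist (backgrounding with & is an assumption; all flags are copied from the trace):

  ip link set nvmf_init_br master nvmf_br              # repeated for each bridge endpoint
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &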
00:13:34.469 [2024-12-06 13:07:22.862081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.469 [2024-12-06 13:07:22.862164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.469 [2024-12-06 13:07:22.862274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.469 [2024-12-06 13:07:22.862277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.469 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.469 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:34.469 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:34.469 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:34.469 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.469 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.469 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:34.469 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:34.469 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.469 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.469 [2024-12-06 13:07:23.038766] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.469 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.469 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:34.469 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.469 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 Malloc1 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.727 13:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 [2024-12-06 13:07:23.210639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:34.727 { 00:13:34.727 "aliases": [ 00:13:34.727 "21556905-038d-40b6-9443-34c547377525" 00:13:34.727 ], 00:13:34.727 "assigned_rate_limits": { 00:13:34.727 "r_mbytes_per_sec": 0, 00:13:34.727 "rw_ios_per_sec": 0, 00:13:34.727 "rw_mbytes_per_sec": 0, 00:13:34.727 "w_mbytes_per_sec": 0 00:13:34.727 }, 00:13:34.727 "block_size": 512, 00:13:34.727 "claim_type": "exclusive_write", 00:13:34.727 "claimed": true, 00:13:34.727 "driver_specific": {}, 00:13:34.727 "memory_domains": [ 00:13:34.727 { 00:13:34.727 "dma_device_id": "system", 00:13:34.727 "dma_device_type": 1 00:13:34.727 }, 00:13:34.727 { 00:13:34.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.727 
"dma_device_type": 2 00:13:34.727 } 00:13:34.727 ], 00:13:34.727 "name": "Malloc1", 00:13:34.727 "num_blocks": 1048576, 00:13:34.727 "product_name": "Malloc disk", 00:13:34.727 "supported_io_types": { 00:13:34.727 "abort": true, 00:13:34.727 "compare": false, 00:13:34.727 "compare_and_write": false, 00:13:34.727 "copy": true, 00:13:34.727 "flush": true, 00:13:34.727 "get_zone_info": false, 00:13:34.727 "nvme_admin": false, 00:13:34.727 "nvme_io": false, 00:13:34.727 "nvme_io_md": false, 00:13:34.727 "nvme_iov_md": false, 00:13:34.727 "read": true, 00:13:34.727 "reset": true, 00:13:34.727 "seek_data": false, 00:13:34.727 "seek_hole": false, 00:13:34.727 "unmap": true, 00:13:34.727 "write": true, 00:13:34.727 "write_zeroes": true, 00:13:34.727 "zcopy": true, 00:13:34.727 "zone_append": false, 00:13:34.727 "zone_management": false 00:13:34.727 }, 00:13:34.727 "uuid": "21556905-038d-40b6-9443-34c547377525", 00:13:34.727 "zoned": false 00:13:34.727 } 00:13:34.727 ]' 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:34.727 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:34.728 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:34.728 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:34.728 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:34.728 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:34.985 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.985 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:34.985 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.985 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:34.985 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:36.961 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:37.219 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:37.219 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.152 ************************************ 00:13:38.152 START TEST filesystem_ext4 00:13:38.152 ************************************ 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
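At this point the whole data path is up: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev (1048576 blocks of 512 bytes) exported as a namespace of nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.3:4420, and a connected, GPT-partitioned host-side controller. Condensed into a sketch (rpc.py as the client behind the trace's rpc_cmd wrapper is an assumption; the flags are verbatim from the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420   # plus --hostnqn/--hostid as traced
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe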
00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:38.152 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:38.152 mke2fs 1.47.0 (5-Feb-2023) 00:13:38.410 Discarding device blocks: 0/522240 done 00:13:38.410 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:38.410 Filesystem UUID: e8f5e674-0b3f-4883-af27-9ba516ea01dd 00:13:38.411 Superblock backups stored on blocks: 00:13:38.411 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:38.411 00:13:38.411 Allocating group tables: 0/64 done 00:13:38.411 Writing inode tables: 0/64 done 00:13:38.411 Creating journal (8192 blocks): done 00:13:38.411 Writing superblocks and filesystem accounting information: 0/64 done 00:13:38.411 00:13:38.411 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:38.411 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:43.678 
13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71712 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:43.678 00:13:43.678 real 0m5.612s 00:13:43.678 user 0m0.025s 00:13:43.678 sys 0m0.060s 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.678 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:43.678 ************************************ 00:13:43.678 END TEST filesystem_ext4 00:13:43.678 ************************************ 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.936 ************************************ 00:13:43.936 START TEST filesystem_btrfs 00:13:43.936 ************************************ 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:43.936 13:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:43.936 btrfs-progs v6.8.1 00:13:43.936 See https://btrfs.readthedocs.io for more information. 00:13:43.936 00:13:43.936 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:43.936 NOTE: several default settings have changed in version 5.15, please make sure 00:13:43.936 this does not affect your deployments: 00:13:43.936 - DUP for metadata (-m dup) 00:13:43.936 - enabled no-holes (-O no-holes) 00:13:43.936 - enabled free-space-tree (-R free-space-tree) 00:13:43.936 00:13:43.936 Label: (null) 00:13:43.936 UUID: ccc75d9f-8d80-43f2-9307-6124bc669877 00:13:43.936 Node size: 16384 00:13:43.936 Sector size: 4096 (CPU page size: 4096) 00:13:43.936 Filesystem size: 510.00MiB 00:13:43.936 Block group profiles: 00:13:43.936 Data: single 8.00MiB 00:13:43.936 Metadata: DUP 32.00MiB 00:13:43.936 System: DUP 8.00MiB 00:13:43.936 SSD detected: yes 00:13:43.936 Zoned device: no 00:13:43.936 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:43.936 Checksum: crc32c 00:13:43.936 Number of devices: 1 00:13:43.936 Devices: 00:13:43.936 ID SIZE PATH 00:13:43.936 1 510.00MiB /dev/nvme0n1p1 00:13:43.936 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:43.936 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71712 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:44.195 
13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:44.195 00:13:44.195 real 0m0.239s 00:13:44.195 user 0m0.018s 00:13:44.195 sys 0m0.066s 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:44.195 ************************************ 00:13:44.195 END TEST filesystem_btrfs 00:13:44.195 ************************************ 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.195 ************************************ 00:13:44.195 START TEST filesystem_xfs 00:13:44.195 ************************************ 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:44.195 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:44.195 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:44.195 = sectsz=512 attr=2, projid32bit=1 00:13:44.195 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:44.195 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:44.195 data 
= bsize=4096 blocks=130560, imaxpct=25 00:13:44.195 = sunit=0 swidth=0 blks 00:13:44.195 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:44.195 log =internal log bsize=4096 blocks=16384, version=2 00:13:44.195 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:44.195 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:45.194 Discarding blocks...Done. 00:13:45.194 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:45.194 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:47.096 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:47.096 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:47.096 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:47.096 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:47.096 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:47.096 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71712 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:47.354 00:13:47.354 real 0m3.123s 00:13:47.354 user 0m0.019s 00:13:47.354 sys 0m0.056s 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:47.354 ************************************ 00:13:47.354 END TEST filesystem_xfs 00:13:47.354 ************************************ 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.354 13:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71712 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71712 ']' 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71712 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71712 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.354 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.355 killing process with pid 71712 00:13:47.355 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71712' 00:13:47.355 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 71712 00:13:47.355 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 71712 00:13:47.920 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:47.920 00:13:47.920 real 0m13.770s 00:13:47.920 user 0m52.355s 00:13:47.920 sys 0m2.059s 00:13:47.920 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:47.921 ************************************ 00:13:47.921 END TEST nvmf_filesystem_no_in_capsule 00:13:47.921 ************************************ 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:47.921 ************************************ 00:13:47.921 START TEST nvmf_filesystem_in_capsule 00:13:47.921 ************************************ 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=72071 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 72071 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 72071 ']' 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
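The nvmf_filesystem_in_capsule pass that starts here repeats the identical flow; the only functional difference is the transport option below, which (per the pass name and in_capsule=4096) lets commands carry up to 4096 bytes of data inside the command capsule instead of none:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # the first pass used -c 0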
00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.921 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:47.921 [2024-12-06 13:07:36.473070] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:13:47.921 [2024-12-06 13:07:36.473213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.178 [2024-12-06 13:07:36.634108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.178 [2024-12-06 13:07:36.700684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.178 [2024-12-06 13:07:36.700736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.178 [2024-12-06 13:07:36.700748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.178 [2024-12-06 13:07:36.700757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.178 [2024-12-06 13:07:36.700765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.178 [2024-12-06 13:07:36.701851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.178 [2024-12-06 13:07:36.702432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.178 [2024-12-06 13:07:36.702559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.178 [2024-12-06 13:07:36.702668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.108 [2024-12-06 13:07:37.548067] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.108 13:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.108 Malloc1 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.108 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.109 [2024-12-06 13:07:37.725914] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:49.109 13:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.109 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.366 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.366 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:49.366 { 00:13:49.366 "aliases": [ 00:13:49.366 "61414fd3-fc30-445e-901d-a1c397a65404" 00:13:49.366 ], 00:13:49.366 "assigned_rate_limits": { 00:13:49.366 "r_mbytes_per_sec": 0, 00:13:49.366 "rw_ios_per_sec": 0, 00:13:49.366 "rw_mbytes_per_sec": 0, 00:13:49.366 "w_mbytes_per_sec": 0 00:13:49.366 }, 00:13:49.366 "block_size": 512, 00:13:49.366 "claim_type": "exclusive_write", 00:13:49.366 "claimed": true, 00:13:49.366 "driver_specific": {}, 00:13:49.366 "memory_domains": [ 00:13:49.366 { 00:13:49.366 "dma_device_id": "system", 00:13:49.366 "dma_device_type": 1 00:13:49.366 }, 00:13:49.366 { 00:13:49.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.366 "dma_device_type": 2 00:13:49.366 } 00:13:49.366 ], 00:13:49.366 "name": "Malloc1", 00:13:49.366 "num_blocks": 1048576, 00:13:49.366 "product_name": "Malloc disk", 00:13:49.366 "supported_io_types": { 00:13:49.366 "abort": true, 00:13:49.366 "compare": false, 00:13:49.366 "compare_and_write": false, 00:13:49.366 "copy": true, 00:13:49.366 "flush": true, 00:13:49.366 "get_zone_info": false, 00:13:49.366 "nvme_admin": false, 00:13:49.366 "nvme_io": false, 00:13:49.366 "nvme_io_md": false, 00:13:49.366 "nvme_iov_md": false, 00:13:49.366 "read": true, 00:13:49.366 "reset": true, 00:13:49.366 "seek_data": false, 00:13:49.366 "seek_hole": false, 00:13:49.366 "unmap": true, 00:13:49.366 "write": true, 00:13:49.366 "write_zeroes": true, 00:13:49.366 "zcopy": true, 00:13:49.366 "zone_append": false, 00:13:49.366 "zone_management": false 00:13:49.366 }, 00:13:49.366 "uuid": "61414fd3-fc30-445e-901d-a1c397a65404", 00:13:49.366 "zoned": false 00:13:49.366 } 00:13:49.366 ]' 00:13:49.366 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:49.366 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:49.366 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:49.366 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:49.366 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:49.366 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:49.366 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:49.366 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:49.623 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.623 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:49.623 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.623 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:49.623 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:51.521 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:51.521 13:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:51.779 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.713 ************************************ 00:13:52.713 START TEST filesystem_in_capsule_ext4 00:13:52.713 ************************************ 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:52.713 mke2fs 1.47.0 (5-Feb-2023) 00:13:52.713 Discarding device blocks: 0/522240 done 00:13:52.713 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:52.713 Filesystem UUID: 006951b0-658a-4abf-b0b3-15e2f5db4130 00:13:52.713 Superblock backups stored on blocks: 00:13:52.713 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:52.713 00:13:52.713 Allocating group tables: 0/64 done 00:13:52.713 Writing inode tables: 
0/64 done 00:13:52.713 Creating journal (8192 blocks): done 00:13:52.713 Writing superblocks and filesystem accounting information: 0/64 done 00:13:52.713 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:52.713 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:57.972 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 72071 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:58.230 00:13:58.230 real 0m5.531s 00:13:58.230 user 0m0.024s 00:13:58.230 sys 0m0.056s 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:58.230 ************************************ 00:13:58.230 END TEST filesystem_in_capsule_ext4 00:13:58.230 ************************************ 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.230 
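The ext4 leg just completed fixes the pattern that the btrfs and xfs legs below repeat: format the partition carved out of the exported null bdev, mount it, run a create/sync/delete cycle, unmount, and verify the device and its partition are still listed. A minimal standalone sketch of that cycle, with the device and mount point taken from the log above (error handling and the surrounding test harness omitted):

    dev=/dev/nvme0n1p1
    mnt=/mnt/device
    mkfs.ext4 -F "$dev"            # -F forces mkfs in a non-interactive run
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
    touch "$mnt/aaa" && sync       # exercise a write through the page cache
    rm "$mnt/aaa" && sync
    umount "$mnt"
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition must survive the cycle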
************************************ 00:13:58.230 START TEST filesystem_in_capsule_btrfs 00:13:58.230 ************************************ 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:58.230 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:58.543 btrfs-progs v6.8.1 00:13:58.543 See https://btrfs.readthedocs.io for more information. 00:13:58.543 00:13:58.543 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:58.543 NOTE: several default settings have changed in version 5.15, please make sure 00:13:58.543 this does not affect your deployments: 00:13:58.543 - DUP for metadata (-m dup) 00:13:58.543 - enabled no-holes (-O no-holes) 00:13:58.543 - enabled free-space-tree (-R free-space-tree) 00:13:58.543 00:13:58.543 Label: (null) 00:13:58.543 UUID: c5c76811-41f3-4fac-bd17-03c05ab01ed7 00:13:58.543 Node size: 16384 00:13:58.543 Sector size: 4096 (CPU page size: 4096) 00:13:58.543 Filesystem size: 510.00MiB 00:13:58.543 Block group profiles: 00:13:58.543 Data: single 8.00MiB 00:13:58.543 Metadata: DUP 32.00MiB 00:13:58.543 System: DUP 8.00MiB 00:13:58.543 SSD detected: yes 00:13:58.543 Zoned device: no 00:13:58.543 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:58.543 Checksum: crc32c 00:13:58.543 Number of devices: 1 00:13:58.543 Devices: 00:13:58.543 ID SIZE PATH 00:13:58.543 1 510.00MiB /dev/nvme0n1p1 00:13:58.543 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 72071 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:58.543 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:58.543 00:13:58.543 real 0m0.266s 00:13:58.543 user 0m0.018s 00:13:58.543 sys 0m0.068s 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- 
# set +x 00:13:58.543 ************************************ 00:13:58.543 END TEST filesystem_in_capsule_btrfs 00:13:58.543 ************************************ 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.543 ************************************ 00:13:58.543 START TEST filesystem_in_capsule_xfs 00:13:58.543 ************************************ 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:58.543 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:58.543 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:58.543 = sectsz=512 attr=2, projid32bit=1 00:13:58.543 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:58.543 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:58.543 data = bsize=4096 blocks=130560, imaxpct=25 00:13:58.543 = sunit=0 swidth=0 blks 00:13:58.543 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:58.543 log =internal log bsize=4096 blocks=16384, version=2 00:13:58.543 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:58.543 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:59.490 Discarding blocks...Done. 
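As a consistency check on the geometry mkfs.xfs prints here: the data section is 130560 blocks of 4096 bytes, i.e. 130560 × 4096 = 534,773,760 bytes = 510 MiB, which matches the 510.00MiB mkfs.btrfs reported for the same partition above (the 512 MiB namespace reported by sec_size_to_bytes as 536870912 bytes, minus GPT overhead):

    echo $((130560 * 4096))   # 534773760 bytes = 510 MiB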
00:13:59.490 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:59.490 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 72071 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:01.383 00:14:01.383 real 0m2.612s 00:14:01.383 user 0m0.020s 00:14:01.383 sys 0m0.050s 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:01.383 ************************************ 00:14:01.383 END TEST filesystem_in_capsule_xfs 00:14:01.383 ************************************ 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:01.383 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 72071 00:14:01.384 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 72071 ']' 00:14:01.384 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 72071 00:14:01.384 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:01.384 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.384 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72071 00:14:01.384 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.384 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.384 killing process with pid 72071 00:14:01.384 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72071' 00:14:01.384 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 72071 00:14:01.384 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 72071 00:14:01.641 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:01.641 00:14:01.641 real 0m13.862s 00:14:01.641 user 0m53.029s 00:14:01.641 sys 0m1.937s 00:14:01.641 13:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.641 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:01.641 ************************************ 00:14:01.641 END TEST nvmf_filesystem_in_capsule 00:14:01.641 ************************************ 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:01.898 rmmod nvme_tcp 00:14:01.898 rmmod nvme_fabrics 00:14:01.898 rmmod nvme_keyring 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
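The nvmf_tcp_fini/nvmf_veth_fini sequence running here unwinds the virtual test network in reverse order of its construction: the SPDK-tagged iptables rules are filtered back out of the saved ruleset, every veth endpoint is detached from the bridge and brought down, the bridge and the initiator-side veth pairs are deleted (removing one end of a veth pair removes its peer), the target-side endpoints are deleted inside the namespace, and remove_spdk_ns (run with its trace suppressed) drops the namespace itself. Condensed into plain commands, the shape of the teardown is roughly:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster && ip link set "$br" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if                  # takes its peer nvmf_init_br with it
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk             # assumed: the step remove_spdk_ns hides above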
00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:01.898 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:14:02.155 00:14:02.155 real 0m28.932s 00:14:02.155 user 1m45.849s 00:14:02.155 sys 0m4.535s 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:02.155 ************************************ 00:14:02.155 END TEST nvmf_filesystem 00:14:02.155 ************************************ 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:02.155 ************************************ 00:14:02.155 START TEST nvmf_target_discovery 00:14:02.155 ************************************ 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:02.155 * Looking for test storage... 
00:14:02.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:02.155 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.413 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:02.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.414 --rc genhtml_branch_coverage=1 00:14:02.414 --rc genhtml_function_coverage=1 00:14:02.414 --rc genhtml_legend=1 00:14:02.414 --rc geninfo_all_blocks=1 00:14:02.414 --rc geninfo_unexecuted_blocks=1 00:14:02.414 00:14:02.414 ' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:02.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.414 --rc genhtml_branch_coverage=1 00:14:02.414 --rc genhtml_function_coverage=1 00:14:02.414 --rc genhtml_legend=1 00:14:02.414 --rc geninfo_all_blocks=1 00:14:02.414 --rc geninfo_unexecuted_blocks=1 00:14:02.414 00:14:02.414 ' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:02.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.414 --rc genhtml_branch_coverage=1 00:14:02.414 --rc genhtml_function_coverage=1 00:14:02.414 --rc genhtml_legend=1 00:14:02.414 --rc geninfo_all_blocks=1 00:14:02.414 --rc geninfo_unexecuted_blocks=1 00:14:02.414 00:14:02.414 ' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:02.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.414 --rc genhtml_branch_coverage=1 00:14:02.414 --rc genhtml_function_coverage=1 00:14:02.414 --rc genhtml_legend=1 00:14:02.414 --rc geninfo_all_blocks=1 00:14:02.414 --rc geninfo_unexecuted_blocks=1 00:14:02.414 00:14:02.414 ' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:02.414 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
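The names being assigned here and just below describe the virtual topology nvmf_veth_init is about to build: two initiator veth pairs kept in the root namespace (10.0.0.1 and 10.0.0.2) and two target veth pairs whose far ends move into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), with all four bridge-side endpoints enslaved to nvmf_br so initiator and target can reach each other. One pair, reduced to its essentials (the other three follow the same shape; exact ordering in the trace below differs slightly):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up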
00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:02.414 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:02.415 Cannot find device "nvmf_init_br" 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:02.415 Cannot find device "nvmf_init_br2" 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:02.415 Cannot find device "nvmf_tgt_br" 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:02.415 Cannot find device "nvmf_tgt_br2" 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:02.415 Cannot find device "nvmf_init_br" 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:02.415 Cannot find device "nvmf_init_br2" 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:02.415 Cannot find device "nvmf_tgt_br" 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:02.415 Cannot find device "nvmf_tgt_br2" 00:14:02.415 13:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:02.415 Cannot find device "nvmf_br" 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:02.415 Cannot find device "nvmf_init_if" 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:02.415 Cannot find device "nvmf_init_if2" 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:02.415 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:02.415 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:14:02.415 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:02.415 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:02.415 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:02.415 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:02.415 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:02.673 13:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:02.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:02.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:02.673 00:14:02.673 --- 10.0.0.3 ping statistics --- 00:14:02.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.673 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:02.673 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:02.673 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:14:02.673 00:14:02.673 --- 10.0.0.4 ping statistics --- 00:14:02.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.673 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:02.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:02.673 00:14:02.673 --- 10.0.0.1 ping statistics --- 00:14:02.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.673 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:02.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:14:02.673 00:14:02.673 --- 10.0.0.2 ping statistics --- 00:14:02.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.673 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=72662 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
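The four pings above validate the topology in both directions before any NVMe-oF traffic flows: the root namespace reaches both target addresses (10.0.0.3, 10.0.0.4) across the bridge, and the target namespace reaches both initiator addresses (10.0.0.1, 10.0.0.2) back. Condensed, the check amounts to:

    for ip in 10.0.0.3 10.0.0.4; do ping -c 1 "$ip"; done                                  # root ns -> target ns
    for ip in 10.0.0.1 10.0.0.2; do ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$ip"; done   # target ns -> root ns

With connectivity proven, nvmf_tgt is launched inside the namespace so its TCP listeners bind the namespaced addresses; -m 0xF pins it to four reactor cores and -e 0xFFFF enables all tracepoint groups, matching the reactor and trace notices that follow.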
00:14:02.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 72662 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 72662 ']' 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.673 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.931 [2024-12-06 13:07:51.370761] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:14:02.931 [2024-12-06 13:07:51.371115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.931 [2024-12-06 13:07:51.523310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.188 [2024-12-06 13:07:51.612831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.188 [2024-12-06 13:07:51.612907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.188 [2024-12-06 13:07:51.612923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.188 [2024-12-06 13:07:51.612935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.188 [2024-12-06 13:07:51.612944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:03.188 [2024-12-06 13:07:51.614531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.188 [2024-12-06 13:07:51.614594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.188 [2024-12-06 13:07:51.614641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.188 [2024-12-06 13:07:51.614648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 [2024-12-06 13:07:52.515536] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 Null1 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 13:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 [2024-12-06 13:07:52.559781] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 Null2 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:14:04.134 Null3 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.134 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.135 Null4 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.135 13:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.135 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -a 10.0.0.3 -s 4420 00:14:04.394 00:14:04.394 Discovery Log Number of Records 6, Generation counter 6 00:14:04.394 =====Discovery Log Entry 0====== 00:14:04.394 trtype: tcp 00:14:04.394 adrfam: ipv4 00:14:04.394 subtype: current discovery subsystem 00:14:04.394 treq: not required 00:14:04.394 portid: 0 00:14:04.394 trsvcid: 4420 00:14:04.394 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:04.394 traddr: 10.0.0.3 00:14:04.394 eflags: explicit discovery connections, duplicate discovery information 00:14:04.394 sectype: none 00:14:04.394 =====Discovery Log Entry 1====== 00:14:04.394 trtype: tcp 00:14:04.394 adrfam: ipv4 00:14:04.394 subtype: nvme subsystem 00:14:04.394 treq: not required 00:14:04.394 portid: 0 00:14:04.394 trsvcid: 4420 00:14:04.394 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:04.394 traddr: 10.0.0.3 00:14:04.394 eflags: none 00:14:04.394 sectype: none 00:14:04.394 =====Discovery Log Entry 2====== 00:14:04.394 trtype: tcp 00:14:04.394 adrfam: ipv4 00:14:04.394 subtype: nvme subsystem 00:14:04.394 treq: not required 00:14:04.394 portid: 0 00:14:04.394 trsvcid: 4420 00:14:04.394 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:04.394 traddr: 10.0.0.3 00:14:04.394 eflags: none 00:14:04.394 sectype: none 00:14:04.394 =====Discovery Log Entry 3====== 00:14:04.394 trtype: tcp 00:14:04.394 adrfam: ipv4 00:14:04.394 subtype: nvme subsystem 00:14:04.394 treq: not required 00:14:04.394 portid: 0 00:14:04.394 trsvcid: 4420 00:14:04.394 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:04.394 traddr: 10.0.0.3 00:14:04.394 eflags: none 00:14:04.394 sectype: none 00:14:04.394 =====Discovery Log Entry 4====== 00:14:04.394 trtype: tcp 00:14:04.394 adrfam: ipv4 00:14:04.394 subtype: nvme subsystem 
00:14:04.394 treq: not required 00:14:04.394 portid: 0 00:14:04.394 trsvcid: 4420 00:14:04.394 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:04.394 traddr: 10.0.0.3 00:14:04.394 eflags: none 00:14:04.394 sectype: none 00:14:04.394 =====Discovery Log Entry 5====== 00:14:04.394 trtype: tcp 00:14:04.394 adrfam: ipv4 00:14:04.394 subtype: discovery subsystem referral 00:14:04.394 treq: not required 00:14:04.394 portid: 0 00:14:04.394 trsvcid: 4430 00:14:04.394 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:04.394 traddr: 10.0.0.3 00:14:04.394 eflags: none 00:14:04.394 sectype: none 00:14:04.394 Perform nvmf subsystem discovery via RPC 00:14:04.394 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:04.394 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:04.394 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.394 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.394 [ 00:14:04.394 { 00:14:04.394 "allow_any_host": true, 00:14:04.394 "hosts": [], 00:14:04.394 "listen_addresses": [ 00:14:04.394 { 00:14:04.394 "adrfam": "IPv4", 00:14:04.394 "traddr": "10.0.0.3", 00:14:04.394 "trsvcid": "4420", 00:14:04.394 "trtype": "TCP" 00:14:04.394 } 00:14:04.394 ], 00:14:04.394 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:04.394 "subtype": "Discovery" 00:14:04.394 }, 00:14:04.394 { 00:14:04.394 "allow_any_host": true, 00:14:04.394 "hosts": [], 00:14:04.394 "listen_addresses": [ 00:14:04.394 { 00:14:04.394 "adrfam": "IPv4", 00:14:04.394 "traddr": "10.0.0.3", 00:14:04.394 "trsvcid": "4420", 00:14:04.394 "trtype": "TCP" 00:14:04.394 } 00:14:04.394 ], 00:14:04.394 "max_cntlid": 65519, 00:14:04.394 "max_namespaces": 32, 00:14:04.394 "min_cntlid": 1, 00:14:04.394 "model_number": "SPDK bdev Controller", 00:14:04.394 "namespaces": [ 00:14:04.394 { 00:14:04.394 "bdev_name": "Null1", 00:14:04.394 "name": "Null1", 00:14:04.394 "nguid": "1A08C4A28B014AB3A36376DB23DBDA47", 00:14:04.394 "nsid": 1, 00:14:04.394 "uuid": "1a08c4a2-8b01-4ab3-a363-76db23dbda47" 00:14:04.394 } 00:14:04.394 ], 00:14:04.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.394 "serial_number": "SPDK00000000000001", 00:14:04.394 "subtype": "NVMe" 00:14:04.394 }, 00:14:04.394 { 00:14:04.394 "allow_any_host": true, 00:14:04.394 "hosts": [], 00:14:04.394 "listen_addresses": [ 00:14:04.394 { 00:14:04.394 "adrfam": "IPv4", 00:14:04.394 "traddr": "10.0.0.3", 00:14:04.394 "trsvcid": "4420", 00:14:04.394 "trtype": "TCP" 00:14:04.394 } 00:14:04.394 ], 00:14:04.394 "max_cntlid": 65519, 00:14:04.394 "max_namespaces": 32, 00:14:04.394 "min_cntlid": 1, 00:14:04.394 "model_number": "SPDK bdev Controller", 00:14:04.394 "namespaces": [ 00:14:04.394 { 00:14:04.394 "bdev_name": "Null2", 00:14:04.394 "name": "Null2", 00:14:04.394 "nguid": "BD72F5E74C1B443483E82D196042C2B0", 00:14:04.394 "nsid": 1, 00:14:04.394 "uuid": "bd72f5e7-4c1b-4434-83e8-2d196042c2b0" 00:14:04.394 } 00:14:04.394 ], 00:14:04.394 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:04.394 "serial_number": "SPDK00000000000002", 00:14:04.394 "subtype": "NVMe" 00:14:04.394 }, 00:14:04.394 { 00:14:04.394 "allow_any_host": true, 00:14:04.394 "hosts": [], 00:14:04.394 "listen_addresses": [ 00:14:04.394 { 00:14:04.394 "adrfam": "IPv4", 00:14:04.394 "traddr": "10.0.0.3", 00:14:04.394 "trsvcid": "4420", 00:14:04.394 
"trtype": "TCP" 00:14:04.394 } 00:14:04.394 ], 00:14:04.394 "max_cntlid": 65519, 00:14:04.394 "max_namespaces": 32, 00:14:04.395 "min_cntlid": 1, 00:14:04.395 "model_number": "SPDK bdev Controller", 00:14:04.395 "namespaces": [ 00:14:04.395 { 00:14:04.395 "bdev_name": "Null3", 00:14:04.395 "name": "Null3", 00:14:04.395 "nguid": "848C2755783443C8B870AA2123192659", 00:14:04.395 "nsid": 1, 00:14:04.395 "uuid": "848c2755-7834-43c8-b870-aa2123192659" 00:14:04.395 } 00:14:04.395 ], 00:14:04.395 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:04.395 "serial_number": "SPDK00000000000003", 00:14:04.395 "subtype": "NVMe" 00:14:04.395 }, 00:14:04.395 { 00:14:04.395 "allow_any_host": true, 00:14:04.395 "hosts": [], 00:14:04.395 "listen_addresses": [ 00:14:04.395 { 00:14:04.395 "adrfam": "IPv4", 00:14:04.395 "traddr": "10.0.0.3", 00:14:04.395 "trsvcid": "4420", 00:14:04.395 "trtype": "TCP" 00:14:04.395 } 00:14:04.395 ], 00:14:04.395 "max_cntlid": 65519, 00:14:04.395 "max_namespaces": 32, 00:14:04.395 "min_cntlid": 1, 00:14:04.395 "model_number": "SPDK bdev Controller", 00:14:04.395 "namespaces": [ 00:14:04.395 { 00:14:04.395 "bdev_name": "Null4", 00:14:04.395 "name": "Null4", 00:14:04.395 "nguid": "3EAF42AD2DFD4668B3F56BFCD159180D", 00:14:04.395 "nsid": 1, 00:14:04.395 "uuid": "3eaf42ad-2dfd-4668-b3f5-6bfcd159180d" 00:14:04.395 } 00:14:04.395 ], 00:14:04.395 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:04.395 "serial_number": "SPDK00000000000004", 00:14:04.395 "subtype": "NVMe" 00:14:04.395 } 00:14:04.395 ] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:04.395 13:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:04.395 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:04.395 rmmod nvme_tcp 00:14:04.395 rmmod nvme_fabrics 00:14:04.395 rmmod nvme_keyring 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 72662 ']' 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 72662 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 72662 ']' 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 72662 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72662 00:14:04.654 killing process with pid 72662 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72662' 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 72662 00:14:04.654 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 72662 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.913 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.172 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:14:05.172 00:14:05.172 real 0m2.888s 00:14:05.173 user 0m7.271s 00:14:05.173 sys 0m0.794s 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:14:05.173 ************************************ 00:14:05.173 END TEST nvmf_target_discovery 00:14:05.173 ************************************ 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:05.173 ************************************ 00:14:05.173 START TEST nvmf_referrals 00:14:05.173 ************************************ 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:05.173 * Looking for test storage... 00:14:05.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:05.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.173 --rc genhtml_branch_coverage=1 00:14:05.173 --rc genhtml_function_coverage=1 00:14:05.173 --rc genhtml_legend=1 00:14:05.173 --rc geninfo_all_blocks=1 00:14:05.173 --rc geninfo_unexecuted_blocks=1 00:14:05.173 00:14:05.173 ' 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:05.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.173 --rc genhtml_branch_coverage=1 00:14:05.173 --rc genhtml_function_coverage=1 00:14:05.173 --rc genhtml_legend=1 00:14:05.173 --rc geninfo_all_blocks=1 00:14:05.173 --rc geninfo_unexecuted_blocks=1 00:14:05.173 00:14:05.173 ' 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:05.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.173 --rc genhtml_branch_coverage=1 00:14:05.173 --rc genhtml_function_coverage=1 00:14:05.173 --rc genhtml_legend=1 00:14:05.173 --rc geninfo_all_blocks=1 00:14:05.173 --rc geninfo_unexecuted_blocks=1 00:14:05.173 00:14:05.173 ' 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:05.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.173 --rc genhtml_branch_coverage=1 00:14:05.173 --rc genhtml_function_coverage=1 00:14:05.173 --rc genhtml_legend=1 00:14:05.173 --rc geninfo_all_blocks=1 00:14:05.173 --rc geninfo_unexecuted_blocks=1 00:14:05.173 00:14:05.173 ' 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
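Before the referrals test proper begins, common.sh is sourced to rebuild the same namespace topology the discovery test above used. The test body then drives the referral RPCs against the discovery subsystem; a sketch is below, using the loopback referral addresses that the NVMF_REFERRAL_IP_* / NVMF_PORT_REFERRAL variables define further down (the add/remove RPC names appear verbatim in this log; nvmf_discovery_get_referrals is, to the best of my knowledge, the standard SPDK query for the current referral list):

    # Add a referral so the discovery log points initiators at a second
    # discovery service, list the referrals, then remove it
    # (run from the spdk repo root; illustrative, not the test script itself):
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430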
00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.173 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.174 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.174 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.174 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:05.433 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:05.433 13:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:05.433 Cannot find device "nvmf_init_br" 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:05.433 Cannot find device "nvmf_init_br2" 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:05.433 Cannot find device "nvmf_tgt_br" 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.433 Cannot find device "nvmf_tgt_br2" 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:05.433 Cannot find device "nvmf_init_br" 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:05.433 Cannot find device "nvmf_init_br2" 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:05.433 Cannot find device "nvmf_tgt_br" 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:05.433 Cannot find device "nvmf_tgt_br2" 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:05.433 Cannot find device "nvmf_br" 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:05.433 Cannot find device "nvmf_init_if" 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:05.433 Cannot find device "nvmf_init_if2" 00:14:05.433 13:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:14:05.433 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.434 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:14:05.434 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.434 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.434 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:05.434 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.434 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.434 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:05.434 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:05.434 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.434 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:05.434 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:05.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:05.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:14:05.692 00:14:05.692 --- 10.0.0.3 ping statistics --- 00:14:05.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.692 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:05.692 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:05.692 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:14:05.692 00:14:05.692 --- 10.0.0.4 ping statistics --- 00:14:05.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.692 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:05.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:05.692 00:14:05.692 --- 10.0.0.1 ping statistics --- 00:14:05.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.692 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:05.692 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:05.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:05.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:14:05.692 00:14:05.692 --- 10.0.0.2 ping statistics --- 00:14:05.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.692 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=72943 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 72943 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 72943 ']' 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.693 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:05.693 [2024-12-06 13:07:54.297674] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:14:05.693 [2024-12-06 13:07:54.297777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.950 [2024-12-06 13:07:54.448434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.950 [2024-12-06 13:07:54.521806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.950 [2024-12-06 13:07:54.521882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.950 [2024-12-06 13:07:54.521899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.951 [2024-12-06 13:07:54.521909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.951 [2024-12-06 13:07:54.521919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.951 [2024-12-06 13:07:54.523295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.951 [2024-12-06 13:07:54.523418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.951 [2024-12-06 13:07:54.523484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.951 [2024-12-06 13:07:54.523488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.882 [2024-12-06 13:07:55.386265] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.882 [2024-12-06 13:07:55.398485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:06.882 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.138 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:07.139 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:07.396 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:07.653 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:14:07.654 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:07.911 13:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:14:07.911 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:07.912 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:07.912 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:07.912 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:07.912 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:07.912 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:07.912 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:07.912 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:14:07.912 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 
--hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:08.169 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:08.426 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:08.426 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:08.426 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:08.426 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:08.426 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:08.426 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:08.426 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:08.426 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:14:08.426 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:08.427 
13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:08.427 rmmod nvme_tcp 00:14:08.427 rmmod nvme_fabrics 00:14:08.427 rmmod nvme_keyring 00:14:08.427 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:08.427 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:08.427 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:08.427 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 72943 ']' 00:14:08.427 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 72943 00:14:08.427 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 72943 ']' 00:14:08.427 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 72943 00:14:08.427 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:14:08.427 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.427 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72943 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.685 killing process with pid 72943 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72943' 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 72943 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 72943 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:08.685 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:14:08.943 00:14:08.943 real 0m3.927s 00:14:08.943 user 0m12.209s 00:14:08.943 sys 0m1.061s 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.943 ************************************ 00:14:08.943 END TEST nvmf_referrals 00:14:08.943 ************************************ 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.943 13:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.252 ************************************ 00:14:09.252 START TEST nvmf_connect_disconnect 00:14:09.252 ************************************ 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:09.252 * Looking for test storage... 
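The nvmf_referrals run that ended above exercised the discovery-referral RPCs end to end: add three referrals, read them back both over RPC and through a host-side nvme discover, attach a referral to a specific subsystem NQN, then remove everything and verify the discovery log page is empty again. A condensed sketch of that round-trip, driven with SPDK's rpc.py (the ./scripts/rpc.py path is an assumption about where the SPDK checkout lives; the RPC names and jq filters are the ones the test itself uses):

  rpc=./scripts/rpc.py
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'    # expect 127.0.0.2
  # the same referral as a host sees it, via the discovery log page
  # (the test additionally passes --hostnqn/--hostid generated by 'nvme gen-hostnqn')
  nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
  $rpc nvmf_discovery_get_referrals | jq length                     # expect 0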
00:14:09.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:09.252 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:09.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.253 --rc genhtml_branch_coverage=1 00:14:09.253 --rc genhtml_function_coverage=1 00:14:09.253 --rc genhtml_legend=1 00:14:09.253 --rc geninfo_all_blocks=1 00:14:09.253 --rc geninfo_unexecuted_blocks=1 00:14:09.253 00:14:09.253 ' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:09.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.253 --rc genhtml_branch_coverage=1 00:14:09.253 --rc genhtml_function_coverage=1 00:14:09.253 --rc genhtml_legend=1 00:14:09.253 --rc geninfo_all_blocks=1 00:14:09.253 --rc geninfo_unexecuted_blocks=1 00:14:09.253 00:14:09.253 ' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:09.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.253 --rc genhtml_branch_coverage=1 00:14:09.253 --rc genhtml_function_coverage=1 00:14:09.253 --rc genhtml_legend=1 00:14:09.253 --rc geninfo_all_blocks=1 00:14:09.253 --rc geninfo_unexecuted_blocks=1 00:14:09.253 00:14:09.253 ' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:09.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.253 --rc genhtml_branch_coverage=1 00:14:09.253 --rc genhtml_function_coverage=1 00:14:09.253 --rc genhtml_legend=1 00:14:09.253 --rc geninfo_all_blocks=1 00:14:09.253 --rc geninfo_unexecuted_blocks=1 00:14:09.253 00:14:09.253 ' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.253 13:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.253 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.253 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:09.254 Cannot find device "nvmf_init_br" 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:09.254 Cannot find device "nvmf_init_br2" 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:09.254 Cannot find device "nvmf_tgt_br" 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:09.254 Cannot find device "nvmf_tgt_br2" 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:09.254 Cannot find device "nvmf_init_br" 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:09.254 Cannot find device "nvmf_init_br2" 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:09.254 Cannot find device "nvmf_tgt_br" 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:09.254 Cannot find device "nvmf_tgt_br2" 00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
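The run of "Cannot find device" and "Cannot open network namespace" messages here (and at the start of the referrals run earlier) is expected: nvmf_veth_init first tears down any leftovers from a previous run, and each probe at common.sh@162 through @174 is immediately followed by a true in the trace, so the failures are swallowed and setup proceeds on a clean slate. A sketch of the same idempotent-teardown idiom (the explicit || true is an assumption about how the harness tolerates these errors; the device and namespace names are the harness's own):

  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true
      ip link set "$dev" down     || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if        || true
  ip link delete nvmf_init_if2       || true
  # simplified: the harness deletes the tgt interfaces inside the netns first,
  # then removes the namespace itself
  ip netns delete nvmf_tgt_ns_spdk   || true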
00:14:09.254 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:09.512 Cannot find device "nvmf_br" 00:14:09.512 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:14:09.512 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:09.512 Cannot find device "nvmf_init_if" 00:14:09.512 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:14:09.512 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:09.512 Cannot find device "nvmf_init_if2" 00:14:09.512 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:14:09.512 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:09.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.512 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:14:09.512 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:09.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.512 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:14:09.512 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:09.513 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:09.513 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:09.513 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:09.513 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:09.513 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:09.513 13:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:09.513 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:09.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:09.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:14:09.772 00:14:09.772 --- 10.0.0.3 ping statistics --- 00:14:09.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.772 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:09.772 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:09.772 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:14:09.772 00:14:09.772 --- 10.0.0.4 ping statistics --- 00:14:09.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.772 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:09.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:14:09.772 00:14:09.772 --- 10.0.0.1 ping statistics --- 00:14:09.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.772 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:09.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:14:09.772 00:14:09.772 --- 10.0.0.2 ping statistics --- 00:14:09.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.772 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=73302 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 73302 00:14:09.772 13:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 73302 ']' 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.772 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:09.772 [2024-12-06 13:07:58.295362] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:14:09.772 [2024-12-06 13:07:58.295474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.031 [2024-12-06 13:07:58.441843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.031 [2024-12-06 13:07:58.519161] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.031 [2024-12-06 13:07:58.519216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.031 [2024-12-06 13:07:58.519228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.031 [2024-12-06 13:07:58.519236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.031 [2024-12-06 13:07:58.519244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
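At this point nvmf_veth_init has built the topology the target above is launched into: two initiator-side veth pairs on the host (nvmf_init_if at 10.0.0.1/24, nvmf_init_if2 at 10.0.0.2/24), two target-side pairs whose far ends live in the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3/24, nvmf_tgt_if2 at 10.0.0.4/24), all bridge-side peers enslaved to nvmf_br, plus iptables ACCEPT rules for TCP port 4420. A condensed sketch of the wiring for one pair of interfaces (a reconstruction, not the harness's code; the second pair is identical):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br   # both peer ends join the bridge,
    ip link set nvmf_tgt_br  master nvmf_br   # so 10.0.0.1 can reach 10.0.0.3
    # bring all links (and lo inside the namespace) up, then open the port:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The four pings above (host to namespace and namespace to host) verified that L2 path before the target was started.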
00:14:10.031 [2024-12-06 13:07:58.520337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.031 [2024-12-06 13:07:58.520510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.031 [2024-12-06 13:07:58.520563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.031 [2024-12-06 13:07:58.520659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.965 [2024-12-06 13:07:59.406026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.965 13:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.965 [2024-12-06 13:07:59.476251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:10.965 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:13.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:22.370 rmmod nvme_tcp 00:14:22.370 rmmod nvme_fabrics 00:14:22.370 rmmod nvme_keyring 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 73302 ']' 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 73302 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 73302 ']' 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 73302 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
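The five "disconnected 1 controller(s)" lines above are the substance of the test: after the malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, and TCP listener on 10.0.0.3:4420 were created over RPC, each iteration attaches the kernel initiator to the subsystem and detaches it again; only the disconnect side echoes into the log. Roughly, per iteration (a sketch using nvme-cli; the exact flags and device-wait logic in the harness are assumptions):

    # One illustrative connect/disconnect round against the listener above.
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # ... wait for the controller device to appear, then:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # prints: NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)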
00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73302 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.370 killing process with pid 73302 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73302' 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 73302 00:14:22.370 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 73302 00:14:22.628 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.628 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:22.628 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:22.628 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:22.628 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:22.628 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:22.628 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:22.628 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:22.628 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:22.629 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:22.629 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:22.629 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:22.629 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.629 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:22.629 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:22.629 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:22.629 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:22.629 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:22.886 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:22.887 13:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:14:22.887 00:14:22.887 real 0m13.815s 00:14:22.887 user 0m49.565s 00:14:22.887 sys 0m2.130s 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:22.887 ************************************ 00:14:22.887 END TEST nvmf_connect_disconnect 00:14:22.887 ************************************ 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:22.887 ************************************ 00:14:22.887 START TEST nvmf_multitarget 00:14:22.887 ************************************ 00:14:22.887 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:22.887 * Looking for test storage... 
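One detail worth keeping in mind as this next test repeats the same network setup: every firewall rule the harness adds is tagged with an iptables comment starting with SPDK_NVMF (the `ipts` wrapper seen earlier), so teardown never has to track individual rules; `iptr` just round-trips the whole ruleset through a filter. A sketch of the pair, reconstructed from the expanded commands in the log:

    # ipts: insert a rule and tag it with its own argument string.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # iptr: drop every tagged rule in one pass; everything else survives.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

This is why both tests can clean up unconditionally without disturbing rules added by anything else on the machine.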
00:14:23.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:23.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.148 --rc genhtml_branch_coverage=1 00:14:23.148 --rc genhtml_function_coverage=1 00:14:23.148 --rc genhtml_legend=1 00:14:23.148 --rc geninfo_all_blocks=1 00:14:23.148 --rc geninfo_unexecuted_blocks=1 00:14:23.148 00:14:23.148 ' 00:14:23.148 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:23.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.148 --rc genhtml_branch_coverage=1 00:14:23.148 --rc genhtml_function_coverage=1 00:14:23.148 --rc genhtml_legend=1 00:14:23.148 --rc geninfo_all_blocks=1 00:14:23.148 --rc geninfo_unexecuted_blocks=1 00:14:23.149 00:14:23.149 ' 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:23.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.149 --rc genhtml_branch_coverage=1 00:14:23.149 --rc genhtml_function_coverage=1 00:14:23.149 --rc genhtml_legend=1 00:14:23.149 --rc geninfo_all_blocks=1 00:14:23.149 --rc geninfo_unexecuted_blocks=1 00:14:23.149 00:14:23.149 ' 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:23.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.149 --rc genhtml_branch_coverage=1 00:14:23.149 --rc genhtml_function_coverage=1 00:14:23.149 --rc genhtml_legend=1 00:14:23.149 --rc geninfo_all_blocks=1 00:14:23.149 --rc geninfo_unexecuted_blocks=1 00:14:23.149 00:14:23.149 ' 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.149 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:23.149 Cannot find device "nvmf_init_br" 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:23.149 Cannot find device "nvmf_init_br2" 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:23.149 Cannot find device "nvmf_tgt_br" 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:14:23.149 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.149 Cannot find device "nvmf_tgt_br2" 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:23.150 Cannot find device "nvmf_init_br" 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:23.150 Cannot find device "nvmf_init_br2" 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:23.150 Cannot find device "nvmf_tgt_br" 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:23.150 Cannot find device "nvmf_tgt_br2" 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:23.150 Cannot find device "nvmf_br" 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:23.150 Cannot find device "nvmf_init_if" 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:23.150 Cannot find device "nvmf_init_if2" 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:14:23.150 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:14:23.408 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.408 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.408 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.408 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:23.408 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:23.408 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:23.408 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.408 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:23.408 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:23.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:14:23.408 00:14:23.408 --- 10.0.0.3 ping statistics --- 00:14:23.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.408 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:23.408 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:23.408 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:23.408 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:14:23.408 00:14:23.408 --- 10.0.0.4 ping statistics --- 00:14:23.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.408 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:23.408 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:23.667 00:14:23.667 --- 10.0.0.1 ping statistics --- 00:14:23.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.667 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:23.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:23.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:14:23.667 00:14:23.667 --- 10.0.0.2 ping statistics --- 00:14:23.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.667 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=73763 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 73763 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 73763 ']' 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.667 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:23.667 [2024-12-06 13:08:12.150937] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
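As in the previous test, nvmfappstart launches the target inside the namespace, records its pid (73763 here), and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that wait, assuming the default /var/tmp/spdk.sock socket shown in the log (the polling loop is illustrative, not the harness code):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the app is up and serving RPCs on the UNIX domain socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.1
    done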
00:14:23.667 [2024-12-06 13:08:12.151045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.667 [2024-12-06 13:08:12.300255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.934 [2024-12-06 13:08:12.362998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.934 [2024-12-06 13:08:12.363285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.934 [2024-12-06 13:08:12.363461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.934 [2024-12-06 13:08:12.363602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.934 [2024-12-06 13:08:12.363657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.934 [2024-12-06 13:08:12.364933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.934 [2024-12-06 13:08:12.365030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.934 [2024-12-06 13:08:12.365121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.934 [2024-12-06 13:08:12.365122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.934 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.934 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:23.934 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:23.934 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:23.934 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:23.934 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.934 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:23.934 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:23.934 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:24.191 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:24.191 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:24.191 "nvmf_tgt_1" 00:14:24.448 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:24.448 "nvmf_tgt_2" 00:14:24.448 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:24.448 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:14:24.705 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:24.705 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:24.705 true 00:14:24.705 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:24.962 true 00:14:24.962 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:24.962 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:24.962 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:24.962 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:24.962 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:24.962 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:24.962 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:25.219 rmmod nvme_tcp 00:14:25.219 rmmod nvme_fabrics 00:14:25.219 rmmod nvme_keyring 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 73763 ']' 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 73763 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 73763 ']' 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 73763 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73763 00:14:25.219 killing process with pid 73763 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
73763' 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 73763 00:14:25.219 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 73763 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:25.476 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:25.476 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:25.476 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:25.476 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:25.476 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:25.476 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:25.476 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:25.476 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:25.476 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:14:25.768 00:14:25.768 
real 0m2.747s 00:14:25.768 user 0m7.549s 00:14:25.768 sys 0m0.786s 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:25.768 ************************************ 00:14:25.768 END TEST nvmf_multitarget 00:14:25.768 ************************************ 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:25.768 ************************************ 00:14:25.768 START TEST nvmf_rpc 00:14:25.768 ************************************ 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:25.768 * Looking for test storage... 00:14:25.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:14:25.768 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:26.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.027 --rc genhtml_branch_coverage=1 00:14:26.027 --rc genhtml_function_coverage=1 00:14:26.027 --rc genhtml_legend=1 00:14:26.027 --rc geninfo_all_blocks=1 00:14:26.027 --rc geninfo_unexecuted_blocks=1 00:14:26.027 00:14:26.027 ' 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:26.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.027 --rc genhtml_branch_coverage=1 00:14:26.027 --rc genhtml_function_coverage=1 00:14:26.027 --rc genhtml_legend=1 00:14:26.027 --rc geninfo_all_blocks=1 00:14:26.027 --rc geninfo_unexecuted_blocks=1 00:14:26.027 00:14:26.027 ' 00:14:26.027 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:26.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.027 --rc genhtml_branch_coverage=1 00:14:26.027 --rc genhtml_function_coverage=1 00:14:26.027 --rc genhtml_legend=1 00:14:26.028 --rc geninfo_all_blocks=1 00:14:26.028 --rc geninfo_unexecuted_blocks=1 00:14:26.028 00:14:26.028 ' 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.028 --rc genhtml_branch_coverage=1 00:14:26.028 --rc genhtml_function_coverage=1 00:14:26.028 --rc genhtml_legend=1 00:14:26.028 --rc geninfo_all_blocks=1 00:14:26.028 --rc geninfo_unexecuted_blocks=1 00:14:26.028 00:14:26.028 ' 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.028 13:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:26.028 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:26.028 Cannot find device "nvmf_init_br" 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:14:26.028 13:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:26.028 Cannot find device "nvmf_init_br2" 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:26.028 Cannot find device "nvmf_tgt_br" 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.028 Cannot find device "nvmf_tgt_br2" 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:26.028 Cannot find device "nvmf_init_br" 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:26.028 Cannot find device "nvmf_init_br2" 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:14:26.028 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:26.028 Cannot find device "nvmf_tgt_br" 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:26.029 Cannot find device "nvmf_tgt_br2" 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:26.029 Cannot find device "nvmf_br" 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:26.029 Cannot find device "nvmf_init_if" 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:26.029 Cannot find device "nvmf_init_if2" 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:26.029 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:26.286 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:26.287 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:26.287 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:14:26.287 00:14:26.287 --- 10.0.0.3 ping statistics --- 00:14:26.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.287 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:26.287 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:26.287 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:14:26.287 00:14:26.287 --- 10.0.0.4 ping statistics --- 00:14:26.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.287 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:26.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:26.287 00:14:26.287 --- 10.0.0.1 ping statistics --- 00:14:26.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.287 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:26.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:26.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:14:26.287 00:14:26.287 --- 10.0.0.2 ping statistics --- 00:14:26.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.287 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=74036 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 74036 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 74036 ']' 00:14:26.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.287 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.544 [2024-12-06 13:08:14.985803] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
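[Note on the test topology] The nvmf_veth_init steps traced above build the veth/bridge network that the pings just verified, before the target app starts inside the namespace. A condensed, standalone sketch of what those steps amount to follows; interface names and addresses are taken from the trace itself, but this is a simplified reconstruction for orientation, not the exact nvmf/common.sh code (the second veth pair, the 10.0.0.2/10.0.0.4 addresses, and error handling are omitted):

# Target end lives in a network namespace; each veth peer left in the
# root namespace is enslaved to a bridge so the two sides can talk.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge                              # joins the two root-ns peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # initiator -> target reachability, as checked in the log

With this layout, traffic from the initiator address 10.0.0.1 reaches the in-namespace target address 10.0.0.3 via nvmf_init_if -> nvmf_init_br -> nvmf_br -> nvmf_tgt_br -> nvmf_tgt_if, which is the path the four ping checks above exercise.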
00:14:26.545 [2024-12-06 13:08:14.986734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.545 [2024-12-06 13:08:15.141150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.801 [2024-12-06 13:08:15.212040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.801 [2024-12-06 13:08:15.212332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.801 [2024-12-06 13:08:15.212358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.801 [2024-12-06 13:08:15.212371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.801 [2024-12-06 13:08:15.212381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.801 [2024-12-06 13:08:15.213657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.801 [2024-12-06 13:08:15.213705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.801 [2024-12-06 13:08:15.213848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.801 [2024-12-06 13:08:15.213857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:26.801 "poll_groups": [ 00:14:26.801 { 00:14:26.801 "admin_qpairs": 0, 00:14:26.801 "completed_nvme_io": 0, 00:14:26.801 "current_admin_qpairs": 0, 00:14:26.801 "current_io_qpairs": 0, 00:14:26.801 "io_qpairs": 0, 00:14:26.801 "name": "nvmf_tgt_poll_group_000", 00:14:26.801 "pending_bdev_io": 0, 00:14:26.801 "transports": [] 00:14:26.801 }, 00:14:26.801 { 00:14:26.801 "admin_qpairs": 0, 00:14:26.801 "completed_nvme_io": 0, 00:14:26.801 "current_admin_qpairs": 0, 00:14:26.801 "current_io_qpairs": 0, 00:14:26.801 "io_qpairs": 0, 00:14:26.801 "name": "nvmf_tgt_poll_group_001", 00:14:26.801 "pending_bdev_io": 0, 00:14:26.801 "transports": [] 00:14:26.801 }, 00:14:26.801 { 00:14:26.801 "admin_qpairs": 0, 00:14:26.801 "completed_nvme_io": 0, 00:14:26.801 "current_admin_qpairs": 0, 00:14:26.801 "current_io_qpairs": 0, 
00:14:26.801 "io_qpairs": 0, 00:14:26.801 "name": "nvmf_tgt_poll_group_002", 00:14:26.801 "pending_bdev_io": 0, 00:14:26.801 "transports": [] 00:14:26.801 }, 00:14:26.801 { 00:14:26.801 "admin_qpairs": 0, 00:14:26.801 "completed_nvme_io": 0, 00:14:26.801 "current_admin_qpairs": 0, 00:14:26.801 "current_io_qpairs": 0, 00:14:26.801 "io_qpairs": 0, 00:14:26.801 "name": "nvmf_tgt_poll_group_003", 00:14:26.801 "pending_bdev_io": 0, 00:14:26.801 "transports": [] 00:14:26.801 } 00:14:26.801 ], 00:14:26.801 "tick_rate": 2200000000 00:14:26.801 }' 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:26.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 [2024-12-06 13:08:15.521504] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:27.058 "poll_groups": [ 00:14:27.058 { 00:14:27.058 "admin_qpairs": 0, 00:14:27.058 "completed_nvme_io": 0, 00:14:27.058 "current_admin_qpairs": 0, 00:14:27.058 "current_io_qpairs": 0, 00:14:27.058 "io_qpairs": 0, 00:14:27.058 "name": "nvmf_tgt_poll_group_000", 00:14:27.058 "pending_bdev_io": 0, 00:14:27.058 "transports": [ 00:14:27.058 { 00:14:27.058 "trtype": "TCP" 00:14:27.058 } 00:14:27.058 ] 00:14:27.058 }, 00:14:27.058 { 00:14:27.058 "admin_qpairs": 0, 00:14:27.058 "completed_nvme_io": 0, 00:14:27.058 "current_admin_qpairs": 0, 00:14:27.058 "current_io_qpairs": 0, 00:14:27.058 "io_qpairs": 0, 00:14:27.058 "name": "nvmf_tgt_poll_group_001", 00:14:27.058 "pending_bdev_io": 0, 00:14:27.058 "transports": [ 00:14:27.058 { 00:14:27.058 "trtype": "TCP" 00:14:27.058 } 00:14:27.058 ] 00:14:27.058 }, 00:14:27.058 { 00:14:27.058 "admin_qpairs": 0, 00:14:27.058 "completed_nvme_io": 0, 00:14:27.058 "current_admin_qpairs": 0, 00:14:27.058 "current_io_qpairs": 0, 00:14:27.058 "io_qpairs": 0, 00:14:27.058 "name": "nvmf_tgt_poll_group_002", 00:14:27.058 "pending_bdev_io": 0, 00:14:27.058 "transports": [ 00:14:27.058 { 00:14:27.058 "trtype": "TCP" 00:14:27.058 } 
00:14:27.058 ] 00:14:27.058 }, 00:14:27.058 { 00:14:27.058 "admin_qpairs": 0, 00:14:27.058 "completed_nvme_io": 0, 00:14:27.058 "current_admin_qpairs": 0, 00:14:27.059 "current_io_qpairs": 0, 00:14:27.059 "io_qpairs": 0, 00:14:27.059 "name": "nvmf_tgt_poll_group_003", 00:14:27.059 "pending_bdev_io": 0, 00:14:27.059 "transports": [ 00:14:27.059 { 00:14:27.059 "trtype": "TCP" 00:14:27.059 } 00:14:27.059 ] 00:14:27.059 } 00:14:27.059 ], 00:14:27.059 "tick_rate": 2200000000 00:14:27.059 }' 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.059 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.316 Malloc1 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:27.316 13:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.316 [2024-12-06 13:08:15.729599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -a 10.0.0.3 -s 4420 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -a 10.0.0.3 -s 4420 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -a 10.0.0.3 -s 4420 00:14:27.316 [2024-12-06 13:08:15.758068] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8' 00:14:27.316 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:27.316 could not add new controller: failed to write to nvme-fabrics device 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
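[Note on the host-ACL check] The failed connect just traced is the expected-negative half of rpc.sh's host-authorization test: with allow_any_host disabled and an empty host list, the target rejects the initiator's host NQN. A condensed sketch of the flow being exercised, using SPDK's scripts/rpc.py (which the rpc_cmd wrapper invokes) and nvme-cli as in the trace; $HOSTNQN stands in here for the uuid host NQN shown in the log:

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1       # attach the malloc bdev
rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1    # enforce the per-host ACL
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$HOSTNQN"                                              # rejected: host not in ACL
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$HOSTNQN"                                              # now accepted

The trace that follows does exactly this add_host-then-connect step, then removes the host again (nvmf_subsystem_remove_host) to show the connect failing once more, and finally re-enables allow_any_host so any initiator is admitted.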
00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:27.316 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:29.860 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:29.860 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.860 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:29.860 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:29.860 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.860 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:29.860 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.860 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:29.860 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:29.860 [2024-12-06 13:08:18.059111] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8' 00:14:29.860 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:29.860 could not add new controller: failed to write to nvme-fabrics device 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:29.860 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.760 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.761 [2024-12-06 13:08:20.372901] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.761 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:32.019 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.019 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:32.019 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.019 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:32.019 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.543 [2024-12-06 13:08:22.787909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.543 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.544 13:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:34.544 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:36.444 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:36.444 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:36.444 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.444 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:36.444 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.444 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:36.444 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:36.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.444 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:36.444 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:36.444 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:36.444 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.444 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:36.444 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.444 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:36.444 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:36.444 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.444 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.700 13:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.700 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.700 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.700 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.700 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.700 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:36.700 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.701 [2024-12-06 13:08:25.111187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:36.701 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:14:39.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:39.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:39.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:39.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:39.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:39.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:39.301 13:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.301 [2024-12-06 13:08:27.438364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:39.301 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:41.197 13:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.197 [2024-12-06 13:08:29.741708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:41.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:41.454 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:41.454 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:41.454 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.454 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:41.455 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:43.353 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:43.353 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:43.353 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:43.353 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:43.354 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.354 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:43.354 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.613 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.613 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:43.613 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:43.613 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.613 [2024-12-06 13:08:32.061037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.613 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 [2024-12-06 13:08:32.109051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:43.614 13:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 [2024-12-06 13:08:32.157132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 [2024-12-06 13:08:32.205127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 
13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:43.615 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:43.615 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.615 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.615 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.615 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:43.615 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.615 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.873 [2024-12-06 13:08:32.253214] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:43.873 "poll_groups": [ 00:14:43.873 { 00:14:43.873 "admin_qpairs": 2, 00:14:43.873 "completed_nvme_io": 66, 00:14:43.873 "current_admin_qpairs": 0, 00:14:43.873 "current_io_qpairs": 0, 00:14:43.873 "io_qpairs": 16, 00:14:43.873 "name": "nvmf_tgt_poll_group_000", 00:14:43.873 "pending_bdev_io": 0, 00:14:43.873 "transports": [ 00:14:43.873 { 00:14:43.873 "trtype": "TCP" 00:14:43.873 } 00:14:43.873 ] 00:14:43.873 }, 00:14:43.873 { 00:14:43.873 "admin_qpairs": 3, 00:14:43.873 "completed_nvme_io": 66, 00:14:43.873 "current_admin_qpairs": 0, 00:14:43.873 "current_io_qpairs": 0, 00:14:43.873 "io_qpairs": 17, 00:14:43.873 "name": "nvmf_tgt_poll_group_001", 00:14:43.873 "pending_bdev_io": 0, 00:14:43.873 "transports": [ 00:14:43.873 { 00:14:43.873 "trtype": "TCP" 00:14:43.873 } 00:14:43.873 ] 00:14:43.873 }, 00:14:43.873 { 00:14:43.873 "admin_qpairs": 1, 00:14:43.873 "completed_nvme_io": 119, 00:14:43.873 "current_admin_qpairs": 0, 00:14:43.873 "current_io_qpairs": 0, 00:14:43.873 "io_qpairs": 19, 00:14:43.873 "name": "nvmf_tgt_poll_group_002", 00:14:43.873 "pending_bdev_io": 0, 00:14:43.873 "transports": [ 00:14:43.873 { 00:14:43.873 "trtype": "TCP" 00:14:43.873 } 00:14:43.873 ] 00:14:43.873 }, 00:14:43.873 { 00:14:43.873 "admin_qpairs": 1, 00:14:43.873 "completed_nvme_io": 169, 00:14:43.873 "current_admin_qpairs": 0, 00:14:43.873 "current_io_qpairs": 0, 00:14:43.873 "io_qpairs": 18, 00:14:43.873 "name": "nvmf_tgt_poll_group_003", 00:14:43.873 "pending_bdev_io": 0, 00:14:43.873 "transports": [ 00:14:43.873 { 00:14:43.873 "trtype": "TCP" 00:14:43.873 } 00:14:43.873 ] 00:14:43.873 } 00:14:43.873 ], 
00:14:43.873 "tick_rate": 2200000000 00:14:43.873 }' 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:43.873 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:43.874 rmmod nvme_tcp 00:14:43.874 rmmod nvme_fabrics 00:14:43.874 rmmod nvme_keyring 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 74036 ']' 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 74036 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 74036 ']' 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 74036 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.874 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74036 00:14:44.132 killing process with pid 74036 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.132 13:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74036' 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 74036 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 74036 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:44.132 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:14:44.390 00:14:44.390 real 0m18.739s 00:14:44.390 user 1m8.790s 00:14:44.390 sys 0m2.746s 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.390 ************************************ 00:14:44.390 END TEST nvmf_rpc 00:14:44.390 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.390 ************************************ 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:44.650 ************************************ 00:14:44.650 START TEST nvmf_invalid 00:14:44.650 ************************************ 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:44.650 * Looking for test storage... 00:14:44.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:44.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.650 --rc genhtml_branch_coverage=1 00:14:44.650 --rc genhtml_function_coverage=1 00:14:44.650 --rc genhtml_legend=1 00:14:44.650 --rc geninfo_all_blocks=1 00:14:44.650 --rc geninfo_unexecuted_blocks=1 00:14:44.650 00:14:44.650 ' 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:44.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.650 --rc genhtml_branch_coverage=1 00:14:44.650 --rc genhtml_function_coverage=1 00:14:44.650 --rc genhtml_legend=1 00:14:44.650 --rc geninfo_all_blocks=1 00:14:44.650 --rc geninfo_unexecuted_blocks=1 00:14:44.650 00:14:44.650 ' 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:44.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.650 --rc genhtml_branch_coverage=1 00:14:44.650 --rc genhtml_function_coverage=1 00:14:44.650 --rc genhtml_legend=1 00:14:44.650 --rc geninfo_all_blocks=1 00:14:44.650 --rc geninfo_unexecuted_blocks=1 00:14:44.650 00:14:44.650 ' 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:44.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.650 --rc genhtml_branch_coverage=1 00:14:44.650 --rc genhtml_function_coverage=1 00:14:44.650 --rc genhtml_legend=1 00:14:44.650 --rc geninfo_all_blocks=1 00:14:44.650 --rc geninfo_unexecuted_blocks=1 00:14:44.650 00:14:44.650 ' 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:44.650 13:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.650 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:44.651 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
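[annotation] Before any assertion runs, nvmf_veth_init (traced in the following entries) builds a dedicated veth/bridge topology so the TCP transport tests never touch a real NIC; the "Cannot find device" lines just above are the idempotent cleanup of a previous run. A condensed sketch, paraphrasing the commands that follow (run as root; namespace and interface names are the ones from the trace, and the trace additionally brings the *_if ends and lo up inside and outside the namespace):

    ip netns add nvmf_tgt_ns_spdk                      # target gets its own net namespace
    for p in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
             nvmf_tgt_if:nvmf_tgt_br nvmf_tgt_if2:nvmf_tgt_br2; do
        ip link add "${p%%:*}" type veth peer name "${p##*:}"   # four veth pairs
    done
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk    # target ends move into the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator side: .1 / .2
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target: .3 / .4
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                    # bridge ties the peer ends together
    for dev in nvmf_br nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        [[ $dev != nvmf_br ]] && ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in

The four pings that close the setup (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside nvmf_tgt_ns_spdk) are the smoke test that the bridge forwards both ways; only then is NVMF_APP rewrapped in "ip netns exec nvmf_tgt_ns_spdk".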
00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:44.651 Cannot find device "nvmf_init_br" 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:44.651 Cannot find device "nvmf_init_br2" 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:14:44.651 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:44.910 Cannot find device "nvmf_tgt_br" 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.910 Cannot find device "nvmf_tgt_br2" 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:44.910 Cannot find device "nvmf_init_br" 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:44.910 Cannot find device "nvmf_init_br2" 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:44.910 Cannot find device "nvmf_tgt_br" 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:44.910 Cannot find device "nvmf_tgt_br2" 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:44.910 Cannot find device "nvmf_br" 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:44.910 Cannot find device "nvmf_init_if" 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:44.910 Cannot find device "nvmf_init_if2" 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.910 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:44.910 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:44.911 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:44.911 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:44.911 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:44.911 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.169 13:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:45.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:14:45.169 00:14:45.169 --- 10.0.0.3 ping statistics --- 00:14:45.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.169 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:45.169 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:45.169 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:14:45.169 00:14:45.169 --- 10.0.0.4 ping statistics --- 00:14:45.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.169 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:45.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:45.169 00:14:45.169 --- 10.0.0.1 ping statistics --- 00:14:45.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.169 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:45.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:45.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:14:45.169 00:14:45.169 --- 10.0.0.2 ping statistics --- 00:14:45.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.169 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=74588 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 74588 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 74588 ']' 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.169 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.170 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.170 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:45.170 [2024-12-06 13:08:33.766990] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
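[annotation] With the target up (nvmfpid=74588, waitforlisten on /var/tmp/spdk.sock), the suite turns to its actual subject: JSON-RPC calls that must fail. Each case captures the Go client's output and asserts that it cites the expected reason (Code=-32603 for an unknown target, Code=-32602 for an invalid serial or model number). A minimal sketch of that expect-failure pattern, assuming a hypothetical helper; invalid.sh inlines the same three steps with its own out=/[[ ... ]] plumbing, as the next entries show:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode

    expect_rpc_error() {                 # hypothetical helper, not part of invalid.sh
        local pattern=$1; shift
        local out
        if out=$("$rpc" "$@" 2>&1); then
            echo "expected failure, got success: $out" >&2
            return 1
        fi
        [[ $out == *"$pattern"* ]]       # the failure must cite the right reason
    }

    # the three cases traced next, in order:
    expect_rpc_error 'Unable to find target' \
        nvmf_create_subsystem -t foobar "${nqn}32640"
    expect_rpc_error 'Invalid SN' \
        nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' "${nqn}3884"
    expect_rpc_error 'Invalid MN' \
        nvmf_create_subsystem -d $'SPDK_Controller\037' "${nqn}9359"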
00:14:45.170 [2024-12-06 13:08:33.767294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.448 [2024-12-06 13:08:33.923198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.448 [2024-12-06 13:08:34.005744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.448 [2024-12-06 13:08:34.006094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.448 [2024-12-06 13:08:34.006121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.448 [2024-12-06 13:08:34.006134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.448 [2024-12-06 13:08:34.006144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.448 [2024-12-06 13:08:34.007691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.448 [2024-12-06 13:08:34.007827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.448 [2024-12-06 13:08:34.007967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.448 [2024-12-06 13:08:34.008028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.706 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.706 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:45.706 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.706 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:45.706 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:45.706 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.706 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:45.706 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32640 00:14:45.964 [2024-12-06 13:08:34.460375] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:45.964 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/06 13:08:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode32640 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:14:45.964 request: 00:14:45.964 { 00:14:45.964 "method": "nvmf_create_subsystem", 00:14:45.964 "params": { 00:14:45.964 "nqn": "nqn.2016-06.io.spdk:cnode32640", 00:14:45.964 "tgt_name": "foobar" 00:14:45.964 } 00:14:45.964 } 00:14:45.964 Got JSON-RPC error response 00:14:45.964 GoRPCClient: error on JSON-RPC call' 00:14:45.964 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/06 13:08:34 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode32640 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:14:45.964 request: 00:14:45.964 { 00:14:45.964 "method": "nvmf_create_subsystem", 00:14:45.964 "params": { 00:14:45.964 "nqn": "nqn.2016-06.io.spdk:cnode32640", 00:14:45.964 "tgt_name": "foobar" 00:14:45.964 } 00:14:45.964 } 00:14:45.964 Got JSON-RPC error response 00:14:45.964 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:45.964 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:45.964 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3884 00:14:46.221 [2024-12-06 13:08:34.740648] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3884: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:46.221 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/06 13:08:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3884 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:14:46.221 request: 00:14:46.221 { 00:14:46.221 "method": "nvmf_create_subsystem", 00:14:46.221 "params": { 00:14:46.221 "nqn": "nqn.2016-06.io.spdk:cnode3884", 00:14:46.221 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:14:46.221 } 00:14:46.221 } 00:14:46.221 Got JSON-RPC error response 00:14:46.221 GoRPCClient: error on JSON-RPC call' 00:14:46.221 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/06 13:08:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3884 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:14:46.221 request: 00:14:46.221 { 00:14:46.221 "method": "nvmf_create_subsystem", 00:14:46.221 "params": { 00:14:46.221 "nqn": "nqn.2016-06.io.spdk:cnode3884", 00:14:46.221 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:14:46.221 } 00:14:46.221 } 00:14:46.221 Got JSON-RPC error response 00:14:46.221 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:46.221 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:46.221 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9359 00:14:46.480 [2024-12-06 13:08:35.108954] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9359: invalid model number 'SPDK_Controller' 00:14:46.738 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/06 13:08:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode9359], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:14:46.738 request: 00:14:46.738 { 00:14:46.738 "method": "nvmf_create_subsystem", 00:14:46.738 "params": { 00:14:46.738 "nqn": "nqn.2016-06.io.spdk:cnode9359", 00:14:46.738 "model_number": "SPDK_Controller\u001f" 00:14:46.738 } 
00:14:46.738 } 00:14:46.738 Got JSON-RPC error response 00:14:46.738 GoRPCClient: error on JSON-RPC call' 00:14:46.738 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/06 13:08:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode9359], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:14:46.738 request: 00:14:46.738 { 00:14:46.738 "method": "nvmf_create_subsystem", 00:14:46.738 "params": { 00:14:46.738 "nqn": "nqn.2016-06.io.spdk:cnode9359", 00:14:46.738 "model_number": "SPDK_Controller\u001f" 00:14:46.738 } 00:14:46.738 } 00:14:46.738 Got JSON-RPC error response 00:14:46.739 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
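[annotation] The length limits are probed next with generated strings: gen_random_s emits `length` characters drawn from ASCII codes 32..127, one xtrace'd printf/echo/append triple per character, which is why the trace balloons from here. Functionally it reduces to something like this sketch; the real helper lives in test/nvmf/target/invalid.sh, the composition of the traced steps into one line is inferred, and with RANDOM=0 set earlier the sequence is reproducible:

    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))              # the same 96-entry code table as the trace
        for (( ll = 0; ll < length; ll++ )); do
            # pick a code, hex-encode it, append the decoded byte
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

The first-character check "[[ . == \- ]]" visible further down guards against a leading '-', which rpc.py would parse as an option; the 21- and 41-character results are then fed back through nvmf_create_subsystem to trip the serial-number and model-number length checks.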
00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x77' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 81 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.739 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ . == \- ]] 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '.NSYlgHw.e|z`]Q+3r#~3' 00:14:46.740 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '.NSYlgHw.e|z`]Q+3r#~3' nqn.2016-06.io.spdk:cnode26667 00:14:46.999 [2024-12-06 13:08:35.489291] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26667: invalid serial number '.NSYlgHw.e|z`]Q+3r#~3' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/06 13:08:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26667 serial_number:.NSYlgHw.e|z`]Q+3r#~3], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN .NSYlgHw.e|z`]Q+3r#~3 00:14:46.999 request: 00:14:46.999 { 00:14:46.999 "method": "nvmf_create_subsystem", 00:14:46.999 "params": { 00:14:46.999 "nqn": "nqn.2016-06.io.spdk:cnode26667", 00:14:46.999 "serial_number": ".NSYlgHw.e|z`]Q+3r#~3" 00:14:46.999 } 00:14:46.999 } 00:14:46.999 Got JSON-RPC error response 00:14:46.999 GoRPCClient: error on JSON-RPC call' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/06 13:08:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26667 serial_number:.NSYlgHw.e|z`]Q+3r#~3], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN .NSYlgHw.e|z`]Q+3r#~3 00:14:46.999 request: 00:14:46.999 { 00:14:46.999 "method": "nvmf_create_subsystem", 00:14:46.999 "params": { 00:14:46.999 "nqn": "nqn.2016-06.io.spdk:cnode26667", 00:14:46.999 "serial_number": ".NSYlgHw.e|z`]Q+3r#~3" 00:14:46.999 } 00:14:46.999 } 00:14:46.999 Got JSON-RPC error response 00:14:46.999 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:46.999 13:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.999 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:46.999 13:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24-25 -- # [per-character trace condensed: the (( ll < length )) loop repeats for every position, converting each code point with printf %x, expanding it with echo -e, and appending it with string+= until the 41-character candidate model number echoed below is assembled] 00:14:47.260 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]]
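The condensed loop above assembles a 41-character string, one byte over the 40-byte NVMe model-number (MN) field, which is why the nvmf_create_subsystem call that follows is rejected with "Invalid MN". A minimal sketch of the generation pattern the trace shows (the helper name is illustrative, not the exact function in target/invalid.sh):

```bash
#!/usr/bin/env bash
# Sketch of the per-character builder traced above: pick a printable code
# point, render it as hex with printf %x, expand it with echo -e, append.
gen_random_string() {
    local length=$1 ll string=''
    for (( ll = 0; ll < length; ll++ )); do
        local hex
        hex=$(printf '%x' $(( RANDOM % 95 + 32 )))   # printable ASCII 0x20..0x7e
        string+=$(echo -e "\x$hex")
    done
    echo "$string"
}

gen_random_string 41    # 41 chars > 40-byte MN field => rejected by the target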
00:14:47.260 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '})H@6"+Z+"4:Re!fZs"es`+ $NBAyln:s5~.C8x"G' 00:14:47.260 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '})H@6"+Z+"4:Re!fZs"es`+ $NBAyln:s5~.C8x"G' nqn.2016-06.io.spdk:cnode21820 00:14:47.519 [2024-12-06 13:08:35.969684] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21820: invalid model number '})H@6"+Z+"4:Re!fZs"es`+ $NBAyln:s5~.C8x"G' 00:14:47.519 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/06 13:08:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:})H@6"+Z+"4:Re!fZs"es`+ $NBAyln:s5~.C8x"G nqn:nqn.2016-06.io.spdk:cnode21820], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN })H@6"+Z+"4:Re!fZs"es`+ $NBAyln:s5~.C8x"G 00:14:47.519 request: 00:14:47.519 { 00:14:47.519 "method": "nvmf_create_subsystem", 00:14:47.519 "params": { 00:14:47.519 "nqn": "nqn.2016-06.io.spdk:cnode21820", 00:14:47.519 "model_number": "})H@6\"+Z+\"4:Re!fZs\"es`+ $NBAyln:s5~.C8x\"G" 00:14:47.519 } 00:14:47.519 } 00:14:47.519 Got JSON-RPC error response 00:14:47.519 GoRPCClient: error on JSON-RPC call' 00:14:47.519 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/12/06 13:08:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:})H@6"+Z+"4:Re!fZs"es`+ $NBAyln:s5~.C8x"G nqn:nqn.2016-06.io.spdk:cnode21820], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN })H@6"+Z+"4:Re!fZs"es`+ $NBAyln:s5~.C8x"G 00:14:47.519 request: 00:14:47.519 { 00:14:47.519 "method": "nvmf_create_subsystem", 00:14:47.519 "params": { 00:14:47.519 "nqn": "nqn.2016-06.io.spdk:cnode21820", 00:14:47.519 "model_number": "})H@6\"+Z+\"4:Re!fZs\"es`+ $NBAyln:s5~.C8x\"G" 00:14:47.519 } 00:14:47.519 } 00:14:47.519 Got JSON-RPC error response 00:14:47.519 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:47.519 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:47.777 [2024-12-06 13:08:36.237952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.777 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:48.036 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:48.036 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:48.036 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:48.036 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:48.036 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:48.294 [2024-12-06 13:08:36.786416] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:48.294 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/06 13:08:36 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: 
map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:14:48.294 request: 00:14:48.294 { 00:14:48.294 "method": "nvmf_subsystem_remove_listener", 00:14:48.294 "params": { 00:14:48.294 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:48.294 "listen_address": { 00:14:48.294 "trtype": "tcp", 00:14:48.294 "traddr": "", 00:14:48.294 "trsvcid": "4421" 00:14:48.294 } 00:14:48.294 } 00:14:48.294 } 00:14:48.294 Got JSON-RPC error response 00:14:48.294 GoRPCClient: error on JSON-RPC call' 00:14:48.294 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/12/06 13:08:36 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:14:48.294 request: 00:14:48.294 { 00:14:48.294 "method": "nvmf_subsystem_remove_listener", 00:14:48.294 "params": { 00:14:48.294 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:48.294 "listen_address": { 00:14:48.294 "trtype": "tcp", 00:14:48.294 "traddr": "", 00:14:48.294 "trsvcid": "4421" 00:14:48.294 } 00:14:48.294 } 00:14:48.294 } 00:14:48.294 Got JSON-RPC error response 00:14:48.294 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:48.294 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5595 -i 0 00:14:48.551 [2024-12-06 13:08:37.110663] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5595: invalid cntlid range [0-65519] 00:14:48.552 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/06 13:08:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5595], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:14:48.552 request: 00:14:48.552 { 00:14:48.552 "method": "nvmf_create_subsystem", 00:14:48.552 "params": { 00:14:48.552 "nqn": "nqn.2016-06.io.spdk:cnode5595", 00:14:48.552 "min_cntlid": 0 00:14:48.552 } 00:14:48.552 } 00:14:48.552 Got JSON-RPC error response 00:14:48.552 GoRPCClient: error on JSON-RPC call' 00:14:48.552 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/12/06 13:08:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5595], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:14:48.552 request: 00:14:48.552 { 00:14:48.552 "method": "nvmf_create_subsystem", 00:14:48.552 "params": { 00:14:48.552 "nqn": "nqn.2016-06.io.spdk:cnode5595", 00:14:48.552 "min_cntlid": 0 00:14:48.552 } 00:14:48.552 } 00:14:48.552 Got JSON-RPC error response 00:14:48.552 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:48.552 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17839 -i 65520 00:14:48.809 [2024-12-06 13:08:37.374909] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17839: invalid cntlid range [65520-65519] 
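Taken together, the probes here and just below (min_cntlid 0 and 65520, max_cntlid 0 and 65520, then min 6 with max 5) all fail, which brackets the accepted controller-ID window at [1-65519] with min ≤ max. A sketch of one such negative probe, with the rpc.py path and flags as captured and a check mirroring the [[ ... == *Invalid cntlid range* ]] pattern in the trace:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# min_cntlid=0 is below the accepted window [1-65519], so the call must fail
# and the captured output must mention the invalid range.
out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5595 -i 0 2>&1) || true
[[ $out == *'Invalid cntlid range'* ]] && echo "probe rejected as expected"
```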
00:14:48.809 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/06 13:08:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode17839], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:14:48.809 request: 00:14:48.809 { 00:14:48.809 "method": "nvmf_create_subsystem", 00:14:48.809 "params": { 00:14:48.809 "nqn": "nqn.2016-06.io.spdk:cnode17839", 00:14:48.809 "min_cntlid": 65520 00:14:48.809 } 00:14:48.809 } 00:14:48.809 Got JSON-RPC error response 00:14:48.809 GoRPCClient: error on JSON-RPC call' 00:14:48.809 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/12/06 13:08:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode17839], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:14:48.809 request: 00:14:48.809 { 00:14:48.809 "method": "nvmf_create_subsystem", 00:14:48.809 "params": { 00:14:48.809 "nqn": "nqn.2016-06.io.spdk:cnode17839", 00:14:48.809 "min_cntlid": 65520 00:14:48.809 } 00:14:48.809 } 00:14:48.809 Got JSON-RPC error response 00:14:48.809 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:48.809 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12444 -I 0 00:14:49.115 [2024-12-06 13:08:37.659169] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12444: invalid cntlid range [1-0] 00:14:49.115 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/06 13:08:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12444], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:14:49.115 request: 00:14:49.115 { 00:14:49.115 "method": "nvmf_create_subsystem", 00:14:49.115 "params": { 00:14:49.115 "nqn": "nqn.2016-06.io.spdk:cnode12444", 00:14:49.115 "max_cntlid": 0 00:14:49.115 } 00:14:49.115 } 00:14:49.115 Got JSON-RPC error response 00:14:49.115 GoRPCClient: error on JSON-RPC call' 00:14:49.115 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/12/06 13:08:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12444], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:14:49.115 request: 00:14:49.115 { 00:14:49.115 "method": "nvmf_create_subsystem", 00:14:49.115 "params": { 00:14:49.115 "nqn": "nqn.2016-06.io.spdk:cnode12444", 00:14:49.115 "max_cntlid": 0 00:14:49.115 } 00:14:49.115 } 00:14:49.115 Got JSON-RPC error response 00:14:49.115 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:49.116 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7810 -I 65520 00:14:49.374 [2024-12-06 13:08:37.976161] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7810: invalid cntlid range [1-65520] 00:14:49.374 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/06 13:08:37 
error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode7810], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:14:49.374 request: 00:14:49.374 { 00:14:49.374 "method": "nvmf_create_subsystem", 00:14:49.374 "params": { 00:14:49.374 "nqn": "nqn.2016-06.io.spdk:cnode7810", 00:14:49.374 "max_cntlid": 65520 00:14:49.374 } 00:14:49.374 } 00:14:49.374 Got JSON-RPC error response 00:14:49.374 GoRPCClient: error on JSON-RPC call' 00:14:49.374 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/12/06 13:08:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode7810], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:14:49.374 request: 00:14:49.374 { 00:14:49.374 "method": "nvmf_create_subsystem", 00:14:49.374 "params": { 00:14:49.374 "nqn": "nqn.2016-06.io.spdk:cnode7810", 00:14:49.374 "max_cntlid": 65520 00:14:49.374 } 00:14:49.374 } 00:14:49.374 Got JSON-RPC error response 00:14:49.374 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:49.374 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11254 -i 6 -I 5 00:14:49.632 [2024-12-06 13:08:38.256509] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11254: invalid cntlid range [6-5] 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/06 13:08:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode11254], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:14:49.891 request: 00:14:49.891 { 00:14:49.891 "method": "nvmf_create_subsystem", 00:14:49.891 "params": { 00:14:49.891 "nqn": "nqn.2016-06.io.spdk:cnode11254", 00:14:49.891 "min_cntlid": 6, 00:14:49.891 "max_cntlid": 5 00:14:49.891 } 00:14:49.891 } 00:14:49.891 Got JSON-RPC error response 00:14:49.891 GoRPCClient: error on JSON-RPC call' 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/12/06 13:08:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode11254], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:14:49.891 request: 00:14:49.891 { 00:14:49.891 "method": "nvmf_create_subsystem", 00:14:49.891 "params": { 00:14:49.891 "nqn": "nqn.2016-06.io.spdk:cnode11254", 00:14:49.891 "min_cntlid": 6, 00:14:49.891 "max_cntlid": 5 00:14:49.891 } 00:14:49.891 } 00:14:49.891 Got JSON-RPC error response 00:14:49.891 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:49.891 { 00:14:49.891 "name": "foobar", 00:14:49.891 "method": "nvmf_delete_target", 00:14:49.891 "req_id": 1 00:14:49.891 } 00:14:49.891 Got JSON-RPC error response 00:14:49.891 response: 00:14:49.891 { 
00:14:49.891 "code": -32602, 00:14:49.891 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:49.891 }' 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:49.891 { 00:14:49.891 "name": "foobar", 00:14:49.891 "method": "nvmf_delete_target", 00:14:49.891 "req_id": 1 00:14:49.891 } 00:14:49.891 Got JSON-RPC error response 00:14:49.891 response: 00:14:49.891 { 00:14:49.891 "code": -32602, 00:14:49.891 "message": "The specified target doesn't exist, cannot delete it." 00:14:49.891 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:49.891 rmmod nvme_tcp 00:14:49.891 rmmod nvme_fabrics 00:14:49.891 rmmod nvme_keyring 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 74588 ']' 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 74588 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 74588 ']' 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 74588 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.891 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74588 00:14:50.149 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.149 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.150 killing process with pid 74588 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74588' 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 74588 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 74588 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:50.150 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.408 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:14:50.408 00:14:50.408 real 0m5.964s 00:14:50.408 user 0m22.590s 00:14:50.408 sys 0m1.377s 00:14:50.408 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.408 ************************************ 00:14:50.408 END TEST nvmf_invalid 00:14:50.408 ************************************ 00:14:50.408 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:50.408 13:08:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # 
run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:50.408 13:08:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.408 13:08:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.408 13:08:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.668 ************************************ 00:14:50.668 START TEST nvmf_connect_stress 00:14:50.668 ************************************ 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:50.668 * Looking for test storage... 00:14:50.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.668 --rc genhtml_branch_coverage=1 00:14:50.668 --rc genhtml_function_coverage=1 00:14:50.668 --rc genhtml_legend=1 00:14:50.668 --rc geninfo_all_blocks=1 00:14:50.668 --rc geninfo_unexecuted_blocks=1 00:14:50.668 00:14:50.668 ' 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.668 --rc genhtml_branch_coverage=1 00:14:50.668 --rc genhtml_function_coverage=1 00:14:50.668 --rc genhtml_legend=1 00:14:50.668 --rc geninfo_all_blocks=1 00:14:50.668 --rc geninfo_unexecuted_blocks=1 00:14:50.668 00:14:50.668 ' 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.668 --rc genhtml_branch_coverage=1 00:14:50.668 --rc genhtml_function_coverage=1 00:14:50.668 --rc genhtml_legend=1 00:14:50.668 --rc geninfo_all_blocks=1 00:14:50.668 --rc geninfo_unexecuted_blocks=1 00:14:50.668 00:14:50.668 ' 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.668 --rc genhtml_branch_coverage=1 00:14:50.668 --rc genhtml_function_coverage=1 00:14:50.668 --rc genhtml_legend=1 00:14:50.668 --rc geninfo_all_blocks=1 00:14:50.668 --rc geninfo_unexecuted_blocks=1 00:14:50.668 00:14:50.668 ' 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
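The scripts/common.sh trace just above stepped through a field-by-field version comparison (lt 1.15 2) to pick which lcov option syntax to export. A condensed sketch of that logic, simplified from the trace (the real helper also validates each field with a decimal regex check):

```bash
# Split versions on '.', '-' or ':' and compare numerically field by field;
# returns success when $1 is strictly older than $2.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # versions are equal
}

lt 1.15 2 && echo "lcov older than 2.x: keep the --rc lcov_*_coverage options"
```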
00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.668 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.669 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:50.669 13:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:50.669 13:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:50.669 Cannot find device "nvmf_init_br" 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:50.669 Cannot find device "nvmf_init_br2" 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:50.669 Cannot find device "nvmf_tgt_br" 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:14:50.669 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:50.928 Cannot find device "nvmf_tgt_br2" 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:50.928 Cannot find device "nvmf_init_br" 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:50.928 Cannot find device "nvmf_init_br2" 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:50.928 Cannot find device "nvmf_tgt_br" 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:50.928 Cannot find device "nvmf_tgt_br2" 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:50.928 Cannot find device "nvmf_br" 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:50.928 Cannot find device "nvmf_init_if" 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:50.928 Cannot find device "nvmf_init_if2" 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.928 13:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:50.928 13:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:50.928 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:51.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:14:51.188 00:14:51.188 --- 10.0.0.3 ping statistics --- 00:14:51.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.188 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:51.188 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:51.188 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:14:51.188 00:14:51.188 --- 10.0.0.4 ping statistics --- 00:14:51.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.188 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:51.188 00:14:51.188 --- 10.0.0.1 ping statistics --- 00:14:51.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.188 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:51.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:51.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:14:51.188 00:14:51.188 --- 10.0.0.2 ping statistics --- 00:14:51.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.188 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=75139 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 75139 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 75139 ']' 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.188 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.188 [2024-12-06 13:08:39.699541] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
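nvmfappstart works because nvmf/common.sh@227, traced above, splices the namespace wrapper in front of the app command, so the target binary sees only the 10.0.0.3/10.0.0.4 side of the veth pairs. A minimal sketch of that launch pattern under the same paths; the socket-polling loop here is a simplified stand-in for the harness's waitforlisten:

    # Sketch: launch nvmf_tgt inside the namespace and wait for its RPC socket.
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # netns wrapper first

    "${NVMF_APP[@]}" &
    nvmfpid=$!

    # Poll for the UNIX-domain RPC socket, bailing out if the target
    # dies during startup (waitforlisten implements the same idea).
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid"
        sleep 0.1
    done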
00:14:51.188 [2024-12-06 13:08:39.699657] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.447 [2024-12-06 13:08:39.845077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:51.447 [2024-12-06 13:08:39.909061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.447 [2024-12-06 13:08:39.909122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.447 [2024-12-06 13:08:39.909133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.447 [2024-12-06 13:08:39.909142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.447 [2024-12-06 13:08:39.909150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.447 [2024-12-06 13:08:39.910272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.447 [2024-12-06 13:08:39.910378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.447 [2024-12-06 13:08:39.910384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.377 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.377 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.378 [2024-12-06 13:08:40.781221] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:52.378 13:08:40 
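The three reactor notices above follow directly from the -m 0xE core mask: the mask is a bitmap with bit N selecting core N, so 0xE (binary 1110) pins reactors to cores 1, 2 and 3 and leaves core 0 free. A quick check of the decoding:

    # Decode an SPDK core mask: print each core whose bit is set.
    mask=0xE
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done
    # -> cores 1, 2, 3, matching the 'Total cores available: 3' notice.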
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.378 [2024-12-06 13:08:40.803091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.378 NULL1 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75191 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
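Provisioning is now complete: transport, subsystem, listener and a null bdev, all via rpc_cmd, which forwards to SPDK's JSON-RPC client. The same sequence written directly against scripts/rpc.py, with the exact arguments from the trace (the option glosses in the comments are the common readings; -o and -u 8192 simply mirror the harness's NVMF_TRANSPORT_OPTS):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10        # -a: allow any host, -m: max namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.3 -s 4420            # the namespace-side address from the setup
    $rpc bdev_null_create NULL1 1000 512       # 1000 MB null bdev, 512-byte blocks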
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.378 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.637 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
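Those twenty for/cat iterations populate rpc.txt, the batch of RPC commands the supervision loop below replays for the duration of the stress run. The per-iteration payload is not visible in this trace, so the command lines here are illustrative placeholders only; the frame of the loop matches the script:

    # Sketch of building the RPC batch file; the payload lines are hypothetical.
    rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        printf 'bdev_null_create NULL%s 1000 512\nbdev_null_delete NULL%s\n' \
               "$i" "$i" >> "$rpcs"
    done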
# [[ 0 == 0 ]] 00:14:52.637 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:52.637 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.637 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.637 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.899 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.899 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:52.899 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.899 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.899 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.463 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.463 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:53.463 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.463 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.463 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.720 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.720 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:53.720 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.720 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.720 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.976 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.976 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:53.976 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.976 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.976 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.233 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.233 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:54.233 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.233 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.233 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.796 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.796 
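The kill -0 75191 / rpc_cmd pairs repeating from here are the supervision loop of connect_stress.sh: kill -0 sends no signal, it only probes liveness, and while the probe succeeds the loop replays the RPC batch against the target. A sketch consistent with the trace, including the final wait that reaps the tool once the probe fails:

    # Sketch of the supervision loop (connect_stress.sh lines 34-38 in the trace).
    while kill -0 "$PERF_PID"; do   # probe only; errors out once the tool exits
        rpc_cmd < "$rpcs"           # replay the batch while the stress run is live
    done
    wait "$PERF_PID"                # reap connect_stress, propagating its exit code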
13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:54.796 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.796 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.796 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.053 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.053 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:55.053 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.053 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.053 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.310 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.310 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:55.310 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.310 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.310 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.567 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.567 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:55.567 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.567 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.567 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.824 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.824 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:55.824 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.824 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.824 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.403 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.403 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:56.403 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.403 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.403 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.674 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.674 13:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:56.674 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.674 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.674 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.932 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.932 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:56.932 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.932 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.932 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.189 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.189 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:57.189 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.189 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.189 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.448 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.448 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:57.448 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.448 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.448 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.013 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.013 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:58.013 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.013 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.013 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.270 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.270 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:58.270 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.270 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.270 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.527 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.527 13:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:58.527 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.527 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.527 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.785 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.785 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:58.785 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.785 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.785 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.042 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.042 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:59.042 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.042 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.042 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.608 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.608 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:59.608 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.608 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.608 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.866 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.867 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:14:59.867 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.867 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.867 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.124 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.124 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:15:00.124 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.124 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.124 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.383 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.383 13:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:15:00.383 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.383 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.383 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.642 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.642 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:15:00.642 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.642 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.642 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.207 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.207 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:15:01.207 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.207 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.207 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.464 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.464 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:15:01.464 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.464 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.464 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:15:01.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.980 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.980 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:15:01.980 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.980 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.980 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.237 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.237 13:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:15:02.237 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.237 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.237 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.496 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:02.753 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75191 00:15:02.754 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75191) - No such process 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75191 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:02.754 rmmod nvme_tcp 00:15:02.754 rmmod nvme_fabrics 00:15:02.754 rmmod nvme_keyring 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 75139 ']' 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 75139 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 75139 ']' 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 75139 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75139 00:15:02.754 killing process with pid 75139 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:02.754 
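The rmmod lines above come from nvmfcleanup: unloading nvme-tcp can fail transiently while connections drain, so the harness drops set -e and retries in a bounded loop. A sketch of that shape; only the set +e / loop / set -e frame is visible in the trace, so the break condition and pacing are assumptions:

    # Sketch of the module-unload retry frame from nvmf/common.sh.
    sync
    set +e                          # unload failures are tolerated here
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                     # assumed pacing between attempts
    done
    set -e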
13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75139' 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 75139 00:15:02.754 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 75139 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:03.012 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.270 13:08:51 
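iptr, traced above, is the counterpart of the ipts calls from setup: because every inserted rule carries an SPDK_NVMF comment, cleanup can filter exactly those rules out of the saved ruleset and restore the rest untouched. The pairing, reconstructed from the expansions shown in this log:

    # Tag every rule the harness adds...
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # ...so teardown can strip precisely the tagged rules.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    # As in the trace:
    #   ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # and later, on cleanup:
    #   iptr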
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:15:03.270 00:15:03.270 real 0m12.714s 00:15:03.270 user 0m41.324s 00:15:03.270 sys 0m3.509s 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.270 ************************************ 00:15:03.270 END TEST nvmf_connect_stress 00:15:03.270 ************************************ 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:03.270 ************************************ 00:15:03.270 START TEST nvmf_fused_ordering 00:15:03.270 ************************************ 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:03.270 * Looking for test storage... 00:15:03.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:15:03.270 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.528 13:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.528 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:03.528 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.528 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.528 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.528 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:03.528 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.529 --rc genhtml_branch_coverage=1 00:15:03.529 --rc genhtml_function_coverage=1 00:15:03.529 --rc genhtml_legend=1 00:15:03.529 --rc geninfo_all_blocks=1 00:15:03.529 --rc geninfo_unexecuted_blocks=1 00:15:03.529 00:15:03.529 ' 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.529 --rc genhtml_branch_coverage=1 00:15:03.529 --rc genhtml_function_coverage=1 00:15:03.529 --rc genhtml_legend=1 00:15:03.529 --rc geninfo_all_blocks=1 00:15:03.529 --rc geninfo_unexecuted_blocks=1 00:15:03.529 00:15:03.529 ' 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.529 --rc genhtml_branch_coverage=1 00:15:03.529 --rc genhtml_function_coverage=1 00:15:03.529 --rc genhtml_legend=1 00:15:03.529 --rc geninfo_all_blocks=1 00:15:03.529 --rc geninfo_unexecuted_blocks=1 00:15:03.529 00:15:03.529 ' 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:03.529 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:03.529 --rc genhtml_branch_coverage=1 00:15:03.529 --rc genhtml_function_coverage=1 00:15:03.529 --rc genhtml_legend=1 00:15:03.529 --rc geninfo_all_blocks=1 00:15:03.529 --rc geninfo_unexecuted_blocks=1 00:15:03.529 00:15:03.529 ' 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:03.529 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:03.529 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:03.530 13:08:52 
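The "[: : integer expression expected" message above is a real, tolerated error: nvmf/common.sh line 33 runs a numeric test against a variable that is empty in this configuration, and test(1) cannot compare an empty string with -eq. The condition simply evaluates false and the run continues. A defensive variant defaults the value before testing (the variable name here is a hypothetical stand-in):

    # Guarding a numeric test against an unset/empty flag.
    some_flag=""                          # empty in this configuration
    if [ "${some_flag:-0}" -eq 1 ]; then  # ':-0' supplies a valid integer
        echo "flag enabled"
    fi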
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:03.530 Cannot find device "nvmf_init_br" 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:03.530 Cannot find device "nvmf_init_br2" 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:03.530 Cannot find device "nvmf_tgt_br" 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:03.530 Cannot find device "nvmf_tgt_br2" 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:03.530 Cannot find device "nvmf_init_br" 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:03.530 Cannot find device "nvmf_init_br2" 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:03.530 Cannot find device "nvmf_tgt_br" 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:03.530 Cannot find device "nvmf_tgt_br2" 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:03.530 Cannot find device "nvmf_br" 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:03.530 Cannot find device "nvmf_init_if" 00:15:03.530 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:15:03.530 
13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:03.788 Cannot find device "nvmf_init_if2" 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:03.788 13:08:52 
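The commands above (common.sh@177-204) build the test topology: four veth pairs, with the initiator ends (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) kept in the root namespace, the target ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace, and the *_br peers left behind to be enslaved to the bridge created next. A condensed, equivalent sketch (assumption: the loop form is for clarity only):

ip netns add nvmf_tgt_ns_spdk
for pair in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
            nvmf_tgt_if:nvmf_tgt_br nvmf_tgt_if2:nvmf_tgt_br2; do
    ip link add "${pair%%:*}" type veth peer name "${pair##*:}"
done
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk    # target side lives in the netns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# all links (plus lo inside the namespace) are then brought up, as logged above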
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:03.788 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:04.047 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:04.047 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:15:04.047 00:15:04.047 --- 10.0.0.3 ping statistics --- 00:15:04.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.047 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:04.047 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:04.047 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:15:04.047 00:15:04.047 --- 10.0.0.4 ping statistics --- 00:15:04.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.047 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:04.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
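Two steps happen here: the *_br peers are enslaved to the nvmf_br bridge, and "ipts" opens the firewall for NVMe/TCP port 4420. The expansion logged at common.sh@790 shows the trick: every rule is tagged with an SPDK_NVMF comment so cleanup can later remove exactly these rules and nothing else. A sketch of the wrapper as it behaves above (assumption: reconstructed from the expanded commands in the log, not copied from the source):

ipts() {
    # append the original arguments as a comment for later identification
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT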
00:15:04.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:15:04.047 00:15:04.047 --- 10.0.0.1 ping statistics --- 00:15:04.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.047 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:04.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:15:04.047 00:15:04.047 --- 10.0.0.2 ping statistics --- 00:15:04.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.047 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=75567 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 75567 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 75567 ']' 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
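The four pings (10.0.0.3 and 10.0.0.4 from the root namespace, then 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk) verify bridged connectivity in both directions before anything NVMe-related starts. nvmfappstart then prepends the namespace wrapper to NVMF_APP and launches the target on core mask 0x2. A sketch of that sequence (assumption: the backgrounding and pid capture are simplified here):

ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4           # root ns -> target ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
modprobe nvme-tcp                                  # kernel NVMe/TCP initiator
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!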
00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.047 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.047 [2024-12-06 13:08:52.541083] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:15:04.047 [2024-12-06 13:08:52.541197] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.305 [2024-12-06 13:08:52.695514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.305 [2024-12-06 13:08:52.761381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.305 [2024-12-06 13:08:52.761689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.305 [2024-12-06 13:08:52.761733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.305 [2024-12-06 13:08:52.761745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.305 [2024-12-06 13:08:52.761753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.305 [2024-12-06 13:08:52.762247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.238 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.238 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:05.238 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.239 [2024-12-06 13:08:53.580864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
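"waitforlisten 75567" blocks until the target's JSON-RPC socket is usable; the locals logged above (rpc_addr=/var/tmp/spdk.sock, max_retries=100) come from that helper. A hedged sketch of its core loop (assumption: the real autotest_common.sh version is more elaborate and also probes the socket via rpc.py; this is simplified):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do              # max_retries=100
        kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
        [[ -S $rpc_addr ]] && return 0           # RPC socket exists: ready
        sleep 0.1                                # assumed poll interval
    done
    return 1
}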
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.239 [2024-12-06 13:08:53.605059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.239 NULL1 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.239 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:05.239 [2024-12-06 13:08:53.671131] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
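Lines fused_ordering.sh@15-22 stand up the target over RPC and then run the initiator-side test binary against it. Consolidated, the sequence is (rpc_cmd is SPDK's wrapper around scripts/rpc.py; the size comment is inferred from the "size: 1GB" line below):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512   # null bdev, reported below as a 1GB namespace
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each fused_ordering(N) line that follows appears to be one completed fused-command iteration; that reading is inferred from the output, not from the test source.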
00:15:05.239 [2024-12-06 13:08:53.671399] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75623 ] 00:15:05.496 Attached to nqn.2016-06.io.spdk:cnode1 00:15:05.496 Namespace ID: 1 size: 1GB 00:15:05.496 fused_ordering(0) 00:15:05.496 fused_ordering(1) 00:15:05.496 fused_ordering(2) 00:15:05.496 fused_ordering(3) 00:15:05.496 fused_ordering(4) 00:15:05.496 fused_ordering(5) 00:15:05.496 fused_ordering(6) 00:15:05.496 fused_ordering(7) 00:15:05.496 fused_ordering(8) 00:15:05.496 fused_ordering(9) 00:15:05.496 fused_ordering(10) 00:15:05.496 fused_ordering(11) 00:15:05.496 fused_ordering(12) 00:15:05.496 fused_ordering(13) 00:15:05.496 fused_ordering(14) 00:15:05.496 fused_ordering(15) 00:15:05.496 fused_ordering(16) 00:15:05.496 fused_ordering(17) 00:15:05.496 fused_ordering(18) 00:15:05.496 fused_ordering(19) 00:15:05.496 fused_ordering(20) 00:15:05.496 fused_ordering(21) 00:15:05.496 fused_ordering(22) 00:15:05.496 fused_ordering(23) 00:15:05.496 fused_ordering(24) 00:15:05.496 fused_ordering(25) 00:15:05.496 fused_ordering(26) 00:15:05.496 fused_ordering(27) 00:15:05.496 fused_ordering(28) 00:15:05.496 fused_ordering(29) 00:15:05.496 fused_ordering(30) 00:15:05.496 fused_ordering(31) 00:15:05.496 fused_ordering(32) 00:15:05.496 fused_ordering(33) 00:15:05.496 fused_ordering(34) 00:15:05.496 fused_ordering(35) 00:15:05.496 fused_ordering(36) 00:15:05.496 fused_ordering(37) 00:15:05.496 fused_ordering(38) 00:15:05.496 fused_ordering(39) 00:15:05.496 fused_ordering(40) 00:15:05.496 fused_ordering(41) 00:15:05.496 fused_ordering(42) 00:15:05.496 fused_ordering(43) 00:15:05.496 fused_ordering(44) 00:15:05.496 fused_ordering(45) 00:15:05.496 fused_ordering(46) 00:15:05.496 fused_ordering(47) 00:15:05.496 fused_ordering(48) 00:15:05.496 fused_ordering(49) 00:15:05.496 fused_ordering(50) 00:15:05.496 fused_ordering(51) 00:15:05.496 fused_ordering(52) 00:15:05.496 fused_ordering(53) 00:15:05.496 fused_ordering(54) 00:15:05.496 fused_ordering(55) 00:15:05.496 fused_ordering(56) 00:15:05.496 fused_ordering(57) 00:15:05.496 fused_ordering(58) 00:15:05.496 fused_ordering(59) 00:15:05.496 fused_ordering(60) 00:15:05.496 fused_ordering(61) 00:15:05.496 fused_ordering(62) 00:15:05.496 fused_ordering(63) 00:15:05.496 fused_ordering(64) 00:15:05.497 fused_ordering(65) 00:15:05.497 fused_ordering(66) 00:15:05.497 fused_ordering(67) 00:15:05.497 fused_ordering(68) 00:15:05.497 fused_ordering(69) 00:15:05.497 fused_ordering(70) 00:15:05.497 fused_ordering(71) 00:15:05.497 fused_ordering(72) 00:15:05.497 fused_ordering(73) 00:15:05.497 fused_ordering(74) 00:15:05.497 fused_ordering(75) 00:15:05.497 fused_ordering(76) 00:15:05.497 fused_ordering(77) 00:15:05.497 fused_ordering(78) 00:15:05.497 fused_ordering(79) 00:15:05.497 fused_ordering(80) 00:15:05.497 fused_ordering(81) 00:15:05.497 fused_ordering(82) 00:15:05.497 fused_ordering(83) 00:15:05.497 fused_ordering(84) 00:15:05.497 fused_ordering(85) 00:15:05.497 fused_ordering(86) 00:15:05.497 fused_ordering(87) 00:15:05.497 fused_ordering(88) 00:15:05.497 fused_ordering(89) 00:15:05.497 fused_ordering(90) 00:15:05.497 fused_ordering(91) 00:15:05.497 fused_ordering(92) 00:15:05.497 fused_ordering(93) 00:15:05.497 fused_ordering(94) 00:15:05.497 fused_ordering(95) 00:15:05.497 fused_ordering(96) 00:15:05.497 fused_ordering(97) 00:15:05.497 
fused_ordering(98) 00:15:05.497 fused_ordering(99) 00:15:05.497 fused_ordering(100) 00:15:05.497 fused_ordering(101) 00:15:05.497 fused_ordering(102) 00:15:05.497 fused_ordering(103) 00:15:05.497 fused_ordering(104) 00:15:05.497 fused_ordering(105) 00:15:05.497 fused_ordering(106) 00:15:05.497 fused_ordering(107) 00:15:05.497 fused_ordering(108) 00:15:05.497 fused_ordering(109) 00:15:05.497 fused_ordering(110) 00:15:05.497 fused_ordering(111) 00:15:05.497 fused_ordering(112) 00:15:05.497 fused_ordering(113) 00:15:05.497 fused_ordering(114) 00:15:05.497 fused_ordering(115) 00:15:05.497 fused_ordering(116) 00:15:05.497 fused_ordering(117) 00:15:05.497 fused_ordering(118) 00:15:05.497 fused_ordering(119) 00:15:05.497 fused_ordering(120) 00:15:05.497 fused_ordering(121) 00:15:05.497 fused_ordering(122) 00:15:05.497 fused_ordering(123) 00:15:05.497 fused_ordering(124) 00:15:05.497 fused_ordering(125) 00:15:05.497 fused_ordering(126) 00:15:05.497 fused_ordering(127) 00:15:05.497 fused_ordering(128) 00:15:05.497 fused_ordering(129) 00:15:05.497 fused_ordering(130) 00:15:05.497 fused_ordering(131) 00:15:05.497 fused_ordering(132) 00:15:05.497 fused_ordering(133) 00:15:05.497 fused_ordering(134) 00:15:05.497 fused_ordering(135) 00:15:05.497 fused_ordering(136) 00:15:05.497 fused_ordering(137) 00:15:05.497 fused_ordering(138) 00:15:05.497 fused_ordering(139) 00:15:05.497 fused_ordering(140) 00:15:05.497 fused_ordering(141) 00:15:05.497 fused_ordering(142) 00:15:05.497 fused_ordering(143) 00:15:05.497 fused_ordering(144) 00:15:05.497 fused_ordering(145) 00:15:05.497 fused_ordering(146) 00:15:05.497 fused_ordering(147) 00:15:05.497 fused_ordering(148) 00:15:05.497 fused_ordering(149) 00:15:05.497 fused_ordering(150) 00:15:05.497 fused_ordering(151) 00:15:05.497 fused_ordering(152) 00:15:05.497 fused_ordering(153) 00:15:05.497 fused_ordering(154) 00:15:05.497 fused_ordering(155) 00:15:05.497 fused_ordering(156) 00:15:05.497 fused_ordering(157) 00:15:05.497 fused_ordering(158) 00:15:05.497 fused_ordering(159) 00:15:05.497 fused_ordering(160) 00:15:05.497 fused_ordering(161) 00:15:05.497 fused_ordering(162) 00:15:05.497 fused_ordering(163) 00:15:05.497 fused_ordering(164) 00:15:05.497 fused_ordering(165) 00:15:05.497 fused_ordering(166) 00:15:05.497 fused_ordering(167) 00:15:05.497 fused_ordering(168) 00:15:05.497 fused_ordering(169) 00:15:05.497 fused_ordering(170) 00:15:05.497 fused_ordering(171) 00:15:05.497 fused_ordering(172) 00:15:05.497 fused_ordering(173) 00:15:05.497 fused_ordering(174) 00:15:05.497 fused_ordering(175) 00:15:05.497 fused_ordering(176) 00:15:05.497 fused_ordering(177) 00:15:05.497 fused_ordering(178) 00:15:05.497 fused_ordering(179) 00:15:05.497 fused_ordering(180) 00:15:05.497 fused_ordering(181) 00:15:05.497 fused_ordering(182) 00:15:05.497 fused_ordering(183) 00:15:05.497 fused_ordering(184) 00:15:05.497 fused_ordering(185) 00:15:05.497 fused_ordering(186) 00:15:05.497 fused_ordering(187) 00:15:05.497 fused_ordering(188) 00:15:05.497 fused_ordering(189) 00:15:05.497 fused_ordering(190) 00:15:05.497 fused_ordering(191) 00:15:05.497 fused_ordering(192) 00:15:05.497 fused_ordering(193) 00:15:05.497 fused_ordering(194) 00:15:05.497 fused_ordering(195) 00:15:05.497 fused_ordering(196) 00:15:05.497 fused_ordering(197) 00:15:05.497 fused_ordering(198) 00:15:05.497 fused_ordering(199) 00:15:05.497 fused_ordering(200) 00:15:05.497 fused_ordering(201) 00:15:05.497 fused_ordering(202) 00:15:05.497 fused_ordering(203) 00:15:05.497 fused_ordering(204) 00:15:05.497 fused_ordering(205) 
00:15:05.754 fused_ordering(206) 00:15:05.754 fused_ordering(207) 00:15:05.754 fused_ordering(208) 00:15:05.754 fused_ordering(209) 00:15:05.754 fused_ordering(210) 00:15:05.754 fused_ordering(211) 00:15:05.754 fused_ordering(212) 00:15:05.754 fused_ordering(213) 00:15:05.754 fused_ordering(214) 00:15:05.754 fused_ordering(215) 00:15:05.754 fused_ordering(216) 00:15:05.754 fused_ordering(217) 00:15:05.754 fused_ordering(218) 00:15:05.754 fused_ordering(219) 00:15:05.754 fused_ordering(220) 00:15:05.754 fused_ordering(221) 00:15:05.754 fused_ordering(222) 00:15:05.754 fused_ordering(223) 00:15:05.754 fused_ordering(224) 00:15:05.754 fused_ordering(225) 00:15:05.754 fused_ordering(226) 00:15:05.754 fused_ordering(227) 00:15:05.754 fused_ordering(228) 00:15:05.754 fused_ordering(229) 00:15:05.754 fused_ordering(230) 00:15:05.754 fused_ordering(231) 00:15:05.754 fused_ordering(232) 00:15:05.754 fused_ordering(233) 00:15:05.754 fused_ordering(234) 00:15:05.754 fused_ordering(235) 00:15:05.754 fused_ordering(236) 00:15:05.754 fused_ordering(237) 00:15:05.754 fused_ordering(238) 00:15:05.754 fused_ordering(239) 00:15:05.754 fused_ordering(240) 00:15:05.754 fused_ordering(241) 00:15:05.754 fused_ordering(242) 00:15:05.754 fused_ordering(243) 00:15:05.754 fused_ordering(244) 00:15:05.754 fused_ordering(245) 00:15:05.754 fused_ordering(246) 00:15:05.754 fused_ordering(247) 00:15:05.754 fused_ordering(248) 00:15:05.754 fused_ordering(249) 00:15:05.754 fused_ordering(250) 00:15:05.754 fused_ordering(251) 00:15:05.754 fused_ordering(252) 00:15:05.754 fused_ordering(253) 00:15:05.754 fused_ordering(254) 00:15:05.754 fused_ordering(255) 00:15:05.754 fused_ordering(256) 00:15:05.754 fused_ordering(257) 00:15:05.754 fused_ordering(258) 00:15:05.754 fused_ordering(259) 00:15:05.754 fused_ordering(260) 00:15:05.754 fused_ordering(261) 00:15:05.754 fused_ordering(262) 00:15:05.754 fused_ordering(263) 00:15:05.754 fused_ordering(264) 00:15:05.754 fused_ordering(265) 00:15:05.754 fused_ordering(266) 00:15:05.754 fused_ordering(267) 00:15:05.754 fused_ordering(268) 00:15:05.754 fused_ordering(269) 00:15:05.754 fused_ordering(270) 00:15:05.754 fused_ordering(271) 00:15:05.754 fused_ordering(272) 00:15:05.754 fused_ordering(273) 00:15:05.754 fused_ordering(274) 00:15:05.754 fused_ordering(275) 00:15:05.754 fused_ordering(276) 00:15:05.754 fused_ordering(277) 00:15:05.754 fused_ordering(278) 00:15:05.754 fused_ordering(279) 00:15:05.754 fused_ordering(280) 00:15:05.754 fused_ordering(281) 00:15:05.754 fused_ordering(282) 00:15:05.754 fused_ordering(283) 00:15:05.754 fused_ordering(284) 00:15:05.754 fused_ordering(285) 00:15:05.754 fused_ordering(286) 00:15:05.754 fused_ordering(287) 00:15:05.754 fused_ordering(288) 00:15:05.754 fused_ordering(289) 00:15:05.754 fused_ordering(290) 00:15:05.754 fused_ordering(291) 00:15:05.754 fused_ordering(292) 00:15:05.754 fused_ordering(293) 00:15:05.754 fused_ordering(294) 00:15:05.754 fused_ordering(295) 00:15:05.754 fused_ordering(296) 00:15:05.754 fused_ordering(297) 00:15:05.754 fused_ordering(298) 00:15:05.754 fused_ordering(299) 00:15:05.754 fused_ordering(300) 00:15:05.754 fused_ordering(301) 00:15:05.754 fused_ordering(302) 00:15:05.754 fused_ordering(303) 00:15:05.754 fused_ordering(304) 00:15:05.754 fused_ordering(305) 00:15:05.754 fused_ordering(306) 00:15:05.754 fused_ordering(307) 00:15:05.754 fused_ordering(308) 00:15:05.754 fused_ordering(309) 00:15:05.754 fused_ordering(310) 00:15:05.754 fused_ordering(311) 00:15:05.754 fused_ordering(312) 00:15:05.754 
fused_ordering(313) 00:15:05.754 fused_ordering(314) 00:15:05.754 fused_ordering(315) 00:15:05.754 fused_ordering(316) 00:15:05.755 fused_ordering(317) 00:15:05.755 fused_ordering(318) 00:15:05.755 fused_ordering(319) 00:15:05.755 fused_ordering(320) 00:15:05.755 fused_ordering(321) 00:15:05.755 fused_ordering(322) 00:15:05.755 fused_ordering(323) 00:15:05.755 fused_ordering(324) 00:15:05.755 fused_ordering(325) 00:15:05.755 fused_ordering(326) 00:15:05.755 fused_ordering(327) 00:15:05.755 fused_ordering(328) 00:15:05.755 fused_ordering(329) 00:15:05.755 fused_ordering(330) 00:15:05.755 fused_ordering(331) 00:15:05.755 fused_ordering(332) 00:15:05.755 fused_ordering(333) 00:15:05.755 fused_ordering(334) 00:15:05.755 fused_ordering(335) 00:15:05.755 fused_ordering(336) 00:15:05.755 fused_ordering(337) 00:15:05.755 fused_ordering(338) 00:15:05.755 fused_ordering(339) 00:15:05.755 fused_ordering(340) 00:15:05.755 fused_ordering(341) 00:15:05.755 fused_ordering(342) 00:15:05.755 fused_ordering(343) 00:15:05.755 fused_ordering(344) 00:15:05.755 fused_ordering(345) 00:15:05.755 fused_ordering(346) 00:15:05.755 fused_ordering(347) 00:15:05.755 fused_ordering(348) 00:15:05.755 fused_ordering(349) 00:15:05.755 fused_ordering(350) 00:15:05.755 fused_ordering(351) 00:15:05.755 fused_ordering(352) 00:15:05.755 fused_ordering(353) 00:15:05.755 fused_ordering(354) 00:15:05.755 fused_ordering(355) 00:15:05.755 fused_ordering(356) 00:15:05.755 fused_ordering(357) 00:15:05.755 fused_ordering(358) 00:15:05.755 fused_ordering(359) 00:15:05.755 fused_ordering(360) 00:15:05.755 fused_ordering(361) 00:15:05.755 fused_ordering(362) 00:15:05.755 fused_ordering(363) 00:15:05.755 fused_ordering(364) 00:15:05.755 fused_ordering(365) 00:15:05.755 fused_ordering(366) 00:15:05.755 fused_ordering(367) 00:15:05.755 fused_ordering(368) 00:15:05.755 fused_ordering(369) 00:15:05.755 fused_ordering(370) 00:15:05.755 fused_ordering(371) 00:15:05.755 fused_ordering(372) 00:15:05.755 fused_ordering(373) 00:15:05.755 fused_ordering(374) 00:15:05.755 fused_ordering(375) 00:15:05.755 fused_ordering(376) 00:15:05.755 fused_ordering(377) 00:15:05.755 fused_ordering(378) 00:15:05.755 fused_ordering(379) 00:15:05.755 fused_ordering(380) 00:15:05.755 fused_ordering(381) 00:15:05.755 fused_ordering(382) 00:15:05.755 fused_ordering(383) 00:15:05.755 fused_ordering(384) 00:15:05.755 fused_ordering(385) 00:15:05.755 fused_ordering(386) 00:15:05.755 fused_ordering(387) 00:15:05.755 fused_ordering(388) 00:15:05.755 fused_ordering(389) 00:15:05.755 fused_ordering(390) 00:15:05.755 fused_ordering(391) 00:15:05.755 fused_ordering(392) 00:15:05.755 fused_ordering(393) 00:15:05.755 fused_ordering(394) 00:15:05.755 fused_ordering(395) 00:15:05.755 fused_ordering(396) 00:15:05.755 fused_ordering(397) 00:15:05.755 fused_ordering(398) 00:15:05.755 fused_ordering(399) 00:15:05.755 fused_ordering(400) 00:15:05.755 fused_ordering(401) 00:15:05.755 fused_ordering(402) 00:15:05.755 fused_ordering(403) 00:15:05.755 fused_ordering(404) 00:15:05.755 fused_ordering(405) 00:15:05.755 fused_ordering(406) 00:15:05.755 fused_ordering(407) 00:15:05.755 fused_ordering(408) 00:15:05.755 fused_ordering(409) 00:15:05.755 fused_ordering(410) 00:15:06.318 fused_ordering(411) 00:15:06.318 fused_ordering(412) 00:15:06.318 fused_ordering(413) 00:15:06.318 fused_ordering(414) 00:15:06.318 fused_ordering(415) 00:15:06.318 fused_ordering(416) 00:15:06.318 fused_ordering(417) 00:15:06.318 fused_ordering(418) 00:15:06.318 fused_ordering(419) 00:15:06.318 fused_ordering(420) 
00:15:06.318 fused_ordering(421) 00:15:06.318 fused_ordering(422) 00:15:06.318 fused_ordering(423) 00:15:06.318 fused_ordering(424) 00:15:06.318 fused_ordering(425) 00:15:06.318 fused_ordering(426) 00:15:06.318 fused_ordering(427) 00:15:06.318 fused_ordering(428) 00:15:06.318 fused_ordering(429) 00:15:06.318 fused_ordering(430) 00:15:06.318 fused_ordering(431) 00:15:06.318 fused_ordering(432) 00:15:06.318 fused_ordering(433) 00:15:06.318 fused_ordering(434) 00:15:06.318 fused_ordering(435) 00:15:06.318 fused_ordering(436) 00:15:06.318 fused_ordering(437) 00:15:06.318 fused_ordering(438) 00:15:06.318 fused_ordering(439) 00:15:06.318 fused_ordering(440) 00:15:06.318 fused_ordering(441) 00:15:06.318 fused_ordering(442) 00:15:06.318 fused_ordering(443) 00:15:06.318 fused_ordering(444) 00:15:06.318 fused_ordering(445) 00:15:06.318 fused_ordering(446) 00:15:06.318 fused_ordering(447) 00:15:06.318 fused_ordering(448) 00:15:06.318 fused_ordering(449) 00:15:06.318 fused_ordering(450) 00:15:06.318 fused_ordering(451) 00:15:06.318 fused_ordering(452) 00:15:06.318 fused_ordering(453) 00:15:06.318 fused_ordering(454) 00:15:06.318 fused_ordering(455) 00:15:06.318 fused_ordering(456) 00:15:06.318 fused_ordering(457) 00:15:06.318 fused_ordering(458) 00:15:06.318 fused_ordering(459) 00:15:06.318 fused_ordering(460) 00:15:06.318 fused_ordering(461) 00:15:06.318 fused_ordering(462) 00:15:06.318 fused_ordering(463) 00:15:06.318 fused_ordering(464) 00:15:06.318 fused_ordering(465) 00:15:06.318 fused_ordering(466) 00:15:06.318 fused_ordering(467) 00:15:06.318 fused_ordering(468) 00:15:06.318 fused_ordering(469) 00:15:06.318 fused_ordering(470) 00:15:06.318 fused_ordering(471) 00:15:06.318 fused_ordering(472) 00:15:06.318 fused_ordering(473) 00:15:06.318 fused_ordering(474) 00:15:06.318 fused_ordering(475) 00:15:06.318 fused_ordering(476) 00:15:06.318 fused_ordering(477) 00:15:06.318 fused_ordering(478) 00:15:06.318 fused_ordering(479) 00:15:06.318 fused_ordering(480) 00:15:06.318 fused_ordering(481) 00:15:06.318 fused_ordering(482) 00:15:06.318 fused_ordering(483) 00:15:06.318 fused_ordering(484) 00:15:06.318 fused_ordering(485) 00:15:06.318 fused_ordering(486) 00:15:06.318 fused_ordering(487) 00:15:06.318 fused_ordering(488) 00:15:06.318 fused_ordering(489) 00:15:06.318 fused_ordering(490) 00:15:06.318 fused_ordering(491) 00:15:06.318 fused_ordering(492) 00:15:06.318 fused_ordering(493) 00:15:06.318 fused_ordering(494) 00:15:06.318 fused_ordering(495) 00:15:06.318 fused_ordering(496) 00:15:06.318 fused_ordering(497) 00:15:06.318 fused_ordering(498) 00:15:06.318 fused_ordering(499) 00:15:06.318 fused_ordering(500) 00:15:06.318 fused_ordering(501) 00:15:06.318 fused_ordering(502) 00:15:06.318 fused_ordering(503) 00:15:06.318 fused_ordering(504) 00:15:06.318 fused_ordering(505) 00:15:06.318 fused_ordering(506) 00:15:06.318 fused_ordering(507) 00:15:06.318 fused_ordering(508) 00:15:06.318 fused_ordering(509) 00:15:06.318 fused_ordering(510) 00:15:06.318 fused_ordering(511) 00:15:06.318 fused_ordering(512) 00:15:06.318 fused_ordering(513) 00:15:06.318 fused_ordering(514) 00:15:06.318 fused_ordering(515) 00:15:06.319 fused_ordering(516) 00:15:06.319 fused_ordering(517) 00:15:06.319 fused_ordering(518) 00:15:06.319 fused_ordering(519) 00:15:06.319 fused_ordering(520) 00:15:06.319 fused_ordering(521) 00:15:06.319 fused_ordering(522) 00:15:06.319 fused_ordering(523) 00:15:06.319 fused_ordering(524) 00:15:06.319 fused_ordering(525) 00:15:06.319 fused_ordering(526) 00:15:06.319 fused_ordering(527) 00:15:06.319 
fused_ordering(528) 00:15:06.319 fused_ordering(529) 00:15:06.319 fused_ordering(530) 00:15:06.319 fused_ordering(531) 00:15:06.319 fused_ordering(532) 00:15:06.319 fused_ordering(533) 00:15:06.319 fused_ordering(534) 00:15:06.319 fused_ordering(535) 00:15:06.319 fused_ordering(536) 00:15:06.319 fused_ordering(537) 00:15:06.319 fused_ordering(538) 00:15:06.319 fused_ordering(539) 00:15:06.319 fused_ordering(540) 00:15:06.319 fused_ordering(541) 00:15:06.319 fused_ordering(542) 00:15:06.319 fused_ordering(543) 00:15:06.319 fused_ordering(544) 00:15:06.319 fused_ordering(545) 00:15:06.319 fused_ordering(546) 00:15:06.319 fused_ordering(547) 00:15:06.319 fused_ordering(548) 00:15:06.319 fused_ordering(549) 00:15:06.319 fused_ordering(550) 00:15:06.319 fused_ordering(551) 00:15:06.319 fused_ordering(552) 00:15:06.319 fused_ordering(553) 00:15:06.319 fused_ordering(554) 00:15:06.319 fused_ordering(555) 00:15:06.319 fused_ordering(556) 00:15:06.319 fused_ordering(557) 00:15:06.319 fused_ordering(558) 00:15:06.319 fused_ordering(559) 00:15:06.319 fused_ordering(560) 00:15:06.319 fused_ordering(561) 00:15:06.319 fused_ordering(562) 00:15:06.319 fused_ordering(563) 00:15:06.319 fused_ordering(564) 00:15:06.319 fused_ordering(565) 00:15:06.319 fused_ordering(566) 00:15:06.319 fused_ordering(567) 00:15:06.319 fused_ordering(568) 00:15:06.319 fused_ordering(569) 00:15:06.319 fused_ordering(570) 00:15:06.319 fused_ordering(571) 00:15:06.319 fused_ordering(572) 00:15:06.319 fused_ordering(573) 00:15:06.319 fused_ordering(574) 00:15:06.319 fused_ordering(575) 00:15:06.319 fused_ordering(576) 00:15:06.319 fused_ordering(577) 00:15:06.319 fused_ordering(578) 00:15:06.319 fused_ordering(579) 00:15:06.319 fused_ordering(580) 00:15:06.319 fused_ordering(581) 00:15:06.319 fused_ordering(582) 00:15:06.319 fused_ordering(583) 00:15:06.319 fused_ordering(584) 00:15:06.319 fused_ordering(585) 00:15:06.319 fused_ordering(586) 00:15:06.319 fused_ordering(587) 00:15:06.319 fused_ordering(588) 00:15:06.319 fused_ordering(589) 00:15:06.319 fused_ordering(590) 00:15:06.319 fused_ordering(591) 00:15:06.319 fused_ordering(592) 00:15:06.319 fused_ordering(593) 00:15:06.319 fused_ordering(594) 00:15:06.319 fused_ordering(595) 00:15:06.319 fused_ordering(596) 00:15:06.319 fused_ordering(597) 00:15:06.319 fused_ordering(598) 00:15:06.319 fused_ordering(599) 00:15:06.319 fused_ordering(600) 00:15:06.319 fused_ordering(601) 00:15:06.319 fused_ordering(602) 00:15:06.319 fused_ordering(603) 00:15:06.319 fused_ordering(604) 00:15:06.319 fused_ordering(605) 00:15:06.319 fused_ordering(606) 00:15:06.319 fused_ordering(607) 00:15:06.319 fused_ordering(608) 00:15:06.319 fused_ordering(609) 00:15:06.319 fused_ordering(610) 00:15:06.319 fused_ordering(611) 00:15:06.319 fused_ordering(612) 00:15:06.319 fused_ordering(613) 00:15:06.319 fused_ordering(614) 00:15:06.319 fused_ordering(615) 00:15:06.884 fused_ordering(616) 00:15:06.884 fused_ordering(617) 00:15:06.884 fused_ordering(618) 00:15:06.884 fused_ordering(619) 00:15:06.884 fused_ordering(620) 00:15:06.884 fused_ordering(621) 00:15:06.884 fused_ordering(622) 00:15:06.884 fused_ordering(623) 00:15:06.884 fused_ordering(624) 00:15:06.884 fused_ordering(625) 00:15:06.884 fused_ordering(626) 00:15:06.884 fused_ordering(627) 00:15:06.884 fused_ordering(628) 00:15:06.884 fused_ordering(629) 00:15:06.884 fused_ordering(630) 00:15:06.884 fused_ordering(631) 00:15:06.884 fused_ordering(632) 00:15:06.884 fused_ordering(633) 00:15:06.884 fused_ordering(634) 00:15:06.884 fused_ordering(635) 
00:15:06.884 fused_ordering(636) 00:15:06.884 fused_ordering(637) 00:15:06.884 fused_ordering(638) 00:15:06.884 fused_ordering(639) 00:15:06.884 fused_ordering(640) 00:15:06.884 fused_ordering(641) 00:15:06.884 fused_ordering(642) 00:15:06.884 fused_ordering(643) 00:15:06.884 fused_ordering(644) 00:15:06.884 fused_ordering(645) 00:15:06.884 fused_ordering(646) 00:15:06.884 fused_ordering(647) 00:15:06.884 fused_ordering(648) 00:15:06.884 fused_ordering(649) 00:15:06.884 fused_ordering(650) 00:15:06.884 fused_ordering(651) 00:15:06.884 fused_ordering(652) 00:15:06.884 fused_ordering(653) 00:15:06.884 fused_ordering(654) 00:15:06.884 fused_ordering(655) 00:15:06.884 fused_ordering(656) 00:15:06.884 fused_ordering(657) 00:15:06.884 fused_ordering(658) 00:15:06.884 fused_ordering(659) 00:15:06.884 fused_ordering(660) 00:15:06.884 fused_ordering(661) 00:15:06.884 fused_ordering(662) 00:15:06.884 fused_ordering(663) 00:15:06.884 fused_ordering(664) 00:15:06.884 fused_ordering(665) 00:15:06.884 fused_ordering(666) 00:15:06.884 fused_ordering(667) 00:15:06.884 fused_ordering(668) 00:15:06.884 fused_ordering(669) 00:15:06.884 fused_ordering(670) 00:15:06.884 fused_ordering(671) 00:15:06.884 fused_ordering(672) 00:15:06.884 fused_ordering(673) 00:15:06.884 fused_ordering(674) 00:15:06.884 fused_ordering(675) 00:15:06.884 fused_ordering(676) 00:15:06.884 fused_ordering(677) 00:15:06.884 fused_ordering(678) 00:15:06.884 fused_ordering(679) 00:15:06.884 fused_ordering(680) 00:15:06.884 fused_ordering(681) 00:15:06.884 fused_ordering(682) 00:15:06.884 fused_ordering(683) 00:15:06.884 fused_ordering(684) 00:15:06.884 fused_ordering(685) 00:15:06.884 fused_ordering(686) 00:15:06.884 fused_ordering(687) 00:15:06.884 fused_ordering(688) 00:15:06.884 fused_ordering(689) 00:15:06.884 fused_ordering(690) 00:15:06.884 fused_ordering(691) 00:15:06.884 fused_ordering(692) 00:15:06.884 fused_ordering(693) 00:15:06.884 fused_ordering(694) 00:15:06.884 fused_ordering(695) 00:15:06.884 fused_ordering(696) 00:15:06.884 fused_ordering(697) 00:15:06.884 fused_ordering(698) 00:15:06.884 fused_ordering(699) 00:15:06.884 fused_ordering(700) 00:15:06.884 fused_ordering(701) 00:15:06.884 fused_ordering(702) 00:15:06.884 fused_ordering(703) 00:15:06.884 fused_ordering(704) 00:15:06.884 fused_ordering(705) 00:15:06.884 fused_ordering(706) 00:15:06.884 fused_ordering(707) 00:15:06.884 fused_ordering(708) 00:15:06.884 fused_ordering(709) 00:15:06.884 fused_ordering(710) 00:15:06.884 fused_ordering(711) 00:15:06.884 fused_ordering(712) 00:15:06.884 fused_ordering(713) 00:15:06.884 fused_ordering(714) 00:15:06.884 fused_ordering(715) 00:15:06.884 fused_ordering(716) 00:15:06.884 fused_ordering(717) 00:15:06.884 fused_ordering(718) 00:15:06.884 fused_ordering(719) 00:15:06.884 fused_ordering(720) 00:15:06.884 fused_ordering(721) 00:15:06.884 fused_ordering(722) 00:15:06.884 fused_ordering(723) 00:15:06.884 fused_ordering(724) 00:15:06.884 fused_ordering(725) 00:15:06.884 fused_ordering(726) 00:15:06.884 fused_ordering(727) 00:15:06.884 fused_ordering(728) 00:15:06.884 fused_ordering(729) 00:15:06.884 fused_ordering(730) 00:15:06.884 fused_ordering(731) 00:15:06.884 fused_ordering(732) 00:15:06.884 fused_ordering(733) 00:15:06.884 fused_ordering(734) 00:15:06.884 fused_ordering(735) 00:15:06.884 fused_ordering(736) 00:15:06.884 fused_ordering(737) 00:15:06.884 fused_ordering(738) 00:15:06.884 fused_ordering(739) 00:15:06.884 fused_ordering(740) 00:15:06.884 fused_ordering(741) 00:15:06.884 fused_ordering(742) 00:15:06.884 
fused_ordering(743) 00:15:06.884 fused_ordering(744) 00:15:06.884 fused_ordering(745) 00:15:06.884 fused_ordering(746) 00:15:06.884 fused_ordering(747) 00:15:06.884 fused_ordering(748) 00:15:06.884 fused_ordering(749) 00:15:06.884 fused_ordering(750) 00:15:06.884 fused_ordering(751) 00:15:06.884 fused_ordering(752) 00:15:06.884 fused_ordering(753) 00:15:06.884 fused_ordering(754) 00:15:06.884 fused_ordering(755) 00:15:06.884 fused_ordering(756) 00:15:06.884 fused_ordering(757) 00:15:06.884 fused_ordering(758) 00:15:06.884 fused_ordering(759) 00:15:06.884 fused_ordering(760) 00:15:06.884 fused_ordering(761) 00:15:06.884 fused_ordering(762) 00:15:06.884 fused_ordering(763) 00:15:06.884 fused_ordering(764) 00:15:06.884 fused_ordering(765) 00:15:06.884 fused_ordering(766) 00:15:06.884 fused_ordering(767) 00:15:06.884 fused_ordering(768) 00:15:06.884 fused_ordering(769) 00:15:06.884 fused_ordering(770) 00:15:06.884 fused_ordering(771) 00:15:06.884 fused_ordering(772) 00:15:06.884 fused_ordering(773) 00:15:06.884 fused_ordering(774) 00:15:06.884 fused_ordering(775) 00:15:06.884 fused_ordering(776) 00:15:06.884 fused_ordering(777) 00:15:06.884 fused_ordering(778) 00:15:06.884 fused_ordering(779) 00:15:06.884 fused_ordering(780) 00:15:06.884 fused_ordering(781) 00:15:06.884 fused_ordering(782) 00:15:06.884 fused_ordering(783) 00:15:06.884 fused_ordering(784) 00:15:06.884 fused_ordering(785) 00:15:06.884 fused_ordering(786) 00:15:06.884 fused_ordering(787) 00:15:06.884 fused_ordering(788) 00:15:06.884 fused_ordering(789) 00:15:06.884 fused_ordering(790) 00:15:06.884 fused_ordering(791) 00:15:06.884 fused_ordering(792) 00:15:06.884 fused_ordering(793) 00:15:06.884 fused_ordering(794) 00:15:06.884 fused_ordering(795) 00:15:06.884 fused_ordering(796) 00:15:06.884 fused_ordering(797) 00:15:06.884 fused_ordering(798) 00:15:06.884 fused_ordering(799) 00:15:06.884 fused_ordering(800) 00:15:06.884 fused_ordering(801) 00:15:06.884 fused_ordering(802) 00:15:06.884 fused_ordering(803) 00:15:06.884 fused_ordering(804) 00:15:06.884 fused_ordering(805) 00:15:06.884 fused_ordering(806) 00:15:06.884 fused_ordering(807) 00:15:06.884 fused_ordering(808) 00:15:06.884 fused_ordering(809) 00:15:06.884 fused_ordering(810) 00:15:06.884 fused_ordering(811) 00:15:06.884 fused_ordering(812) 00:15:06.884 fused_ordering(813) 00:15:06.884 fused_ordering(814) 00:15:06.884 fused_ordering(815) 00:15:06.884 fused_ordering(816) 00:15:06.884 fused_ordering(817) 00:15:06.884 fused_ordering(818) 00:15:06.884 fused_ordering(819) 00:15:06.884 fused_ordering(820) 00:15:07.449 fused_ordering(821) 00:15:07.449 fused_ordering(822) 00:15:07.449 fused_ordering(823) 00:15:07.449 fused_ordering(824) 00:15:07.449 fused_ordering(825) 00:15:07.449 fused_ordering(826) 00:15:07.449 fused_ordering(827) 00:15:07.449 fused_ordering(828) 00:15:07.449 fused_ordering(829) 00:15:07.449 fused_ordering(830) 00:15:07.449 fused_ordering(831) 00:15:07.449 fused_ordering(832) 00:15:07.449 fused_ordering(833) 00:15:07.449 fused_ordering(834) 00:15:07.449 fused_ordering(835) 00:15:07.449 fused_ordering(836) 00:15:07.449 fused_ordering(837) 00:15:07.449 fused_ordering(838) 00:15:07.449 fused_ordering(839) 00:15:07.449 fused_ordering(840) 00:15:07.449 fused_ordering(841) 00:15:07.449 fused_ordering(842) 00:15:07.449 fused_ordering(843) 00:15:07.449 fused_ordering(844) 00:15:07.449 fused_ordering(845) 00:15:07.449 fused_ordering(846) 00:15:07.449 fused_ordering(847) 00:15:07.449 fused_ordering(848) 00:15:07.449 fused_ordering(849) 00:15:07.449 fused_ordering(850) 
00:15:07.449 fused_ordering(851) 00:15:07.449 fused_ordering(852) 00:15:07.449 fused_ordering(853) 00:15:07.449 fused_ordering(854) 00:15:07.449 fused_ordering(855) 00:15:07.449 fused_ordering(856) 00:15:07.449 fused_ordering(857) 00:15:07.449 fused_ordering(858) 00:15:07.449 fused_ordering(859) 00:15:07.449 fused_ordering(860) 00:15:07.449 fused_ordering(861) 00:15:07.449 fused_ordering(862) 00:15:07.449 fused_ordering(863) 00:15:07.449 fused_ordering(864) 00:15:07.449 fused_ordering(865) 00:15:07.449 fused_ordering(866) 00:15:07.449 fused_ordering(867) 00:15:07.449 fused_ordering(868) 00:15:07.449 fused_ordering(869) 00:15:07.449 fused_ordering(870) 00:15:07.449 fused_ordering(871) 00:15:07.449 fused_ordering(872) 00:15:07.449 fused_ordering(873) 00:15:07.449 fused_ordering(874) 00:15:07.449 fused_ordering(875) 00:15:07.449 fused_ordering(876) 00:15:07.449 fused_ordering(877) 00:15:07.449 fused_ordering(878) 00:15:07.449 fused_ordering(879) 00:15:07.449 fused_ordering(880) 00:15:07.449 fused_ordering(881) 00:15:07.449 fused_ordering(882) 00:15:07.449 fused_ordering(883) 00:15:07.449 fused_ordering(884) 00:15:07.449 fused_ordering(885) 00:15:07.449 fused_ordering(886) 00:15:07.449 fused_ordering(887) 00:15:07.449 fused_ordering(888) 00:15:07.449 fused_ordering(889) 00:15:07.449 fused_ordering(890) 00:15:07.449 fused_ordering(891) 00:15:07.449 fused_ordering(892) 00:15:07.449 fused_ordering(893) 00:15:07.449 fused_ordering(894) 00:15:07.449 fused_ordering(895) 00:15:07.449 fused_ordering(896) 00:15:07.449 fused_ordering(897) 00:15:07.449 fused_ordering(898) 00:15:07.449 fused_ordering(899) 00:15:07.449 fused_ordering(900) 00:15:07.449 fused_ordering(901) 00:15:07.449 fused_ordering(902) 00:15:07.449 fused_ordering(903) 00:15:07.449 fused_ordering(904) 00:15:07.449 fused_ordering(905) 00:15:07.449 fused_ordering(906) 00:15:07.449 fused_ordering(907) 00:15:07.449 fused_ordering(908) 00:15:07.449 fused_ordering(909) 00:15:07.449 fused_ordering(910) 00:15:07.449 fused_ordering(911) 00:15:07.449 fused_ordering(912) 00:15:07.449 fused_ordering(913) 00:15:07.449 fused_ordering(914) 00:15:07.449 fused_ordering(915) 00:15:07.449 fused_ordering(916) 00:15:07.449 fused_ordering(917) 00:15:07.449 fused_ordering(918) 00:15:07.449 fused_ordering(919) 00:15:07.449 fused_ordering(920) 00:15:07.450 fused_ordering(921) 00:15:07.450 fused_ordering(922) 00:15:07.450 fused_ordering(923) 00:15:07.450 fused_ordering(924) 00:15:07.450 fused_ordering(925) 00:15:07.450 fused_ordering(926) 00:15:07.450 fused_ordering(927) 00:15:07.450 fused_ordering(928) 00:15:07.450 fused_ordering(929) 00:15:07.450 fused_ordering(930) 00:15:07.450 fused_ordering(931) 00:15:07.450 fused_ordering(932) 00:15:07.450 fused_ordering(933) 00:15:07.450 fused_ordering(934) 00:15:07.450 fused_ordering(935) 00:15:07.450 fused_ordering(936) 00:15:07.450 fused_ordering(937) 00:15:07.450 fused_ordering(938) 00:15:07.450 fused_ordering(939) 00:15:07.450 fused_ordering(940) 00:15:07.450 fused_ordering(941) 00:15:07.450 fused_ordering(942) 00:15:07.450 fused_ordering(943) 00:15:07.450 fused_ordering(944) 00:15:07.450 fused_ordering(945) 00:15:07.450 fused_ordering(946) 00:15:07.450 fused_ordering(947) 00:15:07.450 fused_ordering(948) 00:15:07.450 fused_ordering(949) 00:15:07.450 fused_ordering(950) 00:15:07.450 fused_ordering(951) 00:15:07.450 fused_ordering(952) 00:15:07.450 fused_ordering(953) 00:15:07.450 fused_ordering(954) 00:15:07.450 fused_ordering(955) 00:15:07.450 fused_ordering(956) 00:15:07.450 fused_ordering(957) 00:15:07.450 
fused_ordering(958) 00:15:07.450 fused_ordering(959) 00:15:07.450 fused_ordering(960) 00:15:07.450 fused_ordering(961) 00:15:07.450 fused_ordering(962) 00:15:07.450 fused_ordering(963) 00:15:07.450 fused_ordering(964) 00:15:07.450 fused_ordering(965) 00:15:07.450 fused_ordering(966) 00:15:07.450 fused_ordering(967) 00:15:07.450 fused_ordering(968) 00:15:07.450 fused_ordering(969) 00:15:07.450 fused_ordering(970) 00:15:07.450 fused_ordering(971) 00:15:07.450 fused_ordering(972) 00:15:07.450 fused_ordering(973) 00:15:07.450 fused_ordering(974) 00:15:07.450 fused_ordering(975) 00:15:07.450 fused_ordering(976) 00:15:07.450 fused_ordering(977) 00:15:07.450 fused_ordering(978) 00:15:07.450 fused_ordering(979) 00:15:07.450 fused_ordering(980) 00:15:07.450 fused_ordering(981) 00:15:07.450 fused_ordering(982) 00:15:07.450 fused_ordering(983) 00:15:07.450 fused_ordering(984) 00:15:07.450 fused_ordering(985) 00:15:07.450 fused_ordering(986) 00:15:07.450 fused_ordering(987) 00:15:07.450 fused_ordering(988) 00:15:07.450 fused_ordering(989) 00:15:07.450 fused_ordering(990) 00:15:07.450 fused_ordering(991) 00:15:07.450 fused_ordering(992) 00:15:07.450 fused_ordering(993) 00:15:07.450 fused_ordering(994) 00:15:07.450 fused_ordering(995) 00:15:07.450 fused_ordering(996) 00:15:07.450 fused_ordering(997) 00:15:07.450 fused_ordering(998) 00:15:07.450 fused_ordering(999) 00:15:07.450 fused_ordering(1000) 00:15:07.450 fused_ordering(1001) 00:15:07.450 fused_ordering(1002) 00:15:07.450 fused_ordering(1003) 00:15:07.450 fused_ordering(1004) 00:15:07.450 fused_ordering(1005) 00:15:07.450 fused_ordering(1006) 00:15:07.450 fused_ordering(1007) 00:15:07.450 fused_ordering(1008) 00:15:07.450 fused_ordering(1009) 00:15:07.450 fused_ordering(1010) 00:15:07.450 fused_ordering(1011) 00:15:07.450 fused_ordering(1012) 00:15:07.450 fused_ordering(1013) 00:15:07.450 fused_ordering(1014) 00:15:07.450 fused_ordering(1015) 00:15:07.450 fused_ordering(1016) 00:15:07.450 fused_ordering(1017) 00:15:07.450 fused_ordering(1018) 00:15:07.450 fused_ordering(1019) 00:15:07.450 fused_ordering(1020) 00:15:07.450 fused_ordering(1021) 00:15:07.450 fused_ordering(1022) 00:15:07.450 fused_ordering(1023) 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:07.450 rmmod nvme_tcp 00:15:07.450 rmmod nvme_fabrics 00:15:07.450 rmmod nvme_keyring 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:07.450 13:08:55 
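Teardown starts here: the EXIT trap is cleared and nvmftestfini unloads the kernel NVMe modules (the rmmod lines above are modprobe's verbose output). Because connections may still be draining, errexit is suspended and the unload is retried. A sketch of the loop implied by common.sh@124-129 (assumption: the retry delay and break condition are reconstructed from the xtrace, not verbatim):

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumed back-off between attempts
done
set -e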
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 75567 ']' 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 75567 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 75567 ']' 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 75567 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.450 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75567 00:15:07.450 killing process with pid 75567 00:15:07.450 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:07.450 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:07.450 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75567' 00:15:07.450 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 75567 00:15:07.450 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 75567 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:15:07.707 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:15:07.964 00:15:07.964 real 0m4.676s 00:15:07.964 user 0m5.245s 00:15:07.964 sys 0m1.559s 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.964 ************************************ 00:15:07.964 END TEST nvmf_fused_ordering 00:15:07.964 ************************************ 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.964 ************************************ 00:15:07.964 START TEST nvmf_ns_masking 00:15:07.964 ************************************ 00:15:07.964 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:08.222 * Looking for test storage... 
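Cleanup mirrors setup: "iptr" (common.sh@791) removes only the rules "ipts" tagged earlier, and nvmf_veth_fini then dismantles the bridge, veth pairs, and namespace in reverse order. A sketch of the firewall half, reconstructed from the three commands in the log:

iptr() {
    # dump the ruleset, drop every SPDK_NVMF-tagged rule, reload the rest
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

The END/START banners and the real/user/sys timing come from the run_test harness (autotest_common.sh@1105-1129). A hedged sketch of its shape (assumption: simplified; the real helper also validates its arguments and manages xtrace):

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"            # produces the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}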
00:15:08.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.222 --rc genhtml_branch_coverage=1 00:15:08.222 --rc genhtml_function_coverage=1 00:15:08.222 --rc genhtml_legend=1 00:15:08.222 --rc geninfo_all_blocks=1 00:15:08.222 --rc geninfo_unexecuted_blocks=1 00:15:08.222 00:15:08.222 ' 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.222 --rc genhtml_branch_coverage=1 00:15:08.222 --rc genhtml_function_coverage=1 00:15:08.222 --rc genhtml_legend=1 00:15:08.222 --rc geninfo_all_blocks=1 00:15:08.222 --rc geninfo_unexecuted_blocks=1 00:15:08.222 00:15:08.222 ' 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.222 --rc genhtml_branch_coverage=1 00:15:08.222 --rc genhtml_function_coverage=1 00:15:08.222 --rc genhtml_legend=1 00:15:08.222 --rc geninfo_all_blocks=1 00:15:08.222 --rc geninfo_unexecuted_blocks=1 00:15:08.222 00:15:08.222 ' 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.222 --rc genhtml_branch_coverage=1 00:15:08.222 --rc genhtml_function_coverage=1 00:15:08.222 --rc genhtml_legend=1 00:15:08.222 --rc geninfo_all_blocks=1 00:15:08.222 --rc geninfo_unexecuted_blocks=1 00:15:08.222 00:15:08.222 ' 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:08.222 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:08.223 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=dfdad150-c3a4-4742-a9e1-7b270553dcd3 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2b12baa8-65b5-4bac-ad87-b17bafaa9b9d 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a1e2a5c9-59e8-413a-ba5c-265b835864c9 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:08.223 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:08.224 13:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:08.224 Cannot find device "nvmf_init_br" 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:08.224 Cannot find device "nvmf_init_br2" 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:08.224 Cannot find device "nvmf_tgt_br" 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:15:08.224 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.481 Cannot find device "nvmf_tgt_br2" 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:08.481 Cannot find device "nvmf_init_br" 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:08.481 Cannot find device "nvmf_init_br2" 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:08.481 Cannot find device "nvmf_tgt_br" 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:08.481 Cannot find device 
"nvmf_tgt_br2" 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:08.481 Cannot find device "nvmf_br" 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:08.481 Cannot find device "nvmf_init_if" 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:08.481 Cannot find device "nvmf_init_if2" 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:08.481 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:08.481 
13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:08.481 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:08.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:08.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:15:08.740 00:15:08.740 --- 10.0.0.3 ping statistics --- 00:15:08.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.740 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:08.740 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
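These connectivity pings run over the veth/bridge sandbox that nvmf_veth_init assembled just above: the initiator interfaces stay in the root namespace with 10.0.0.1 and 10.0.0.2, the target interfaces sit inside the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, and the peer ends of the veth pairs are enslaved to the nvmf_br bridge. Reduced to one initiator/target pair (link-up commands omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The iptables ACCEPT rules for TCP port 4420 are tagged with an SPDK_NVMF comment, which is what lets the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore strip them cleanly, as seen at the end of the previous test.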
00:15:08.740 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:15:08.740 00:15:08.740 --- 10.0.0.4 ping statistics --- 00:15:08.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.740 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:08.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:08.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:15:08.740 00:15:08.740 --- 10.0.0.1 ping statistics --- 00:15:08.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.740 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:08.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:15:08.740 00:15:08.740 --- 10.0.0.2 ping statistics --- 00:15:08.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.740 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=75897 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 75897 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75897 ']' 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.740 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.740 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:08.741 [2024-12-06 13:08:57.316436] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:15:08.741 [2024-12-06 13:08:57.316902] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.998 [2024-12-06 13:08:57.476481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.998 [2024-12-06 13:08:57.557133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.998 [2024-12-06 13:08:57.557247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.998 [2024-12-06 13:08:57.557264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.999 [2024-12-06 13:08:57.557276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.999 [2024-12-06 13:08:57.557287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.999 [2024-12-06 13:08:57.557872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.931 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.931 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:09.931 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:09.931 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:09.931 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:09.931 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.931 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:10.192 [2024-12-06 13:08:58.607608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.192 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:10.192 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:10.192 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:10.451 Malloc1 00:15:10.451 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:10.709 Malloc2 00:15:10.709 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:11.279 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:11.279 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:11.844 [2024-12-06 13:09:00.192104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:11.844 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:11.844 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a1e2a5c9-59e8-413a-ba5c-265b835864c9 -a 10.0.0.3 -s 4420 -i 4 00:15:11.844 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:11.844 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:11.844 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.844 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:11.844 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:13.744 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:13.744 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:13.744 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.744 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:13.744 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.744 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:13.744 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:13.744 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:14.003 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:14.003 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:14.003 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:14.003 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.003 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.003 [ 0]:0x1 00:15:14.003 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.003 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
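The ns_is_visible helper being traced here (and reused throughout the rest of the suite) decides visibility from the connected host's point of view with two probes: the NSID must show up in nvme list-ns (the [ 0]:0x1-style lines are those grep hits), and the NGUID reported by nvme id-ns must be non-zero, since an inactive (masked) namespace identifies with an all-zero NGUID. Paraphrased from the target/ns_masking.sh trace:

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"                      # NSID present in the active list?
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]       # all-zero NGUID => masked
    }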
00:15:14.003 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c619918d15c54c71bbd571639d1efe0d 00:15:14.003 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c619918d15c54c71bbd571639d1efe0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.003 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.262 [ 0]:0x1 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c619918d15c54c71bbd571639d1efe0d 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c619918d15c54c71bbd571639d1efe0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:14.262 [ 1]:0x2 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.262 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.520 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdf79e4686924ec4963a77ac21984746 00:15:14.520 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdf79e4686924ec4963a77ac21984746 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.520 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:14.520 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.520 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.779 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:15.057 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:15.057 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a1e2a5c9-59e8-413a-ba5c-265b835864c9 -a 10.0.0.3 -s 4420 -i 4 00:15:15.057 13:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:15.057 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:15.057 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.057 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:15.057 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:15.057 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:17.592 [ 0]:0x2 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdf79e4686924ec4963a77ac21984746 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdf79e4686924ec4963a77ac21984746 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.592 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:17.592 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:17.592 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.592 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:17.592 [ 0]:0x1 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c619918d15c54c71bbd571639d1efe0d 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c619918d15c54c71bbd571639d1efe0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:17.593 [ 1]:0x2 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=fdf79e4686924ec4963a77ac21984746 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdf79e4686924ec4963a77ac21984746 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.593 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.160 [ 0]:0x2 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdf79e4686924ec4963a77ac21984746 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ fdf79e4686924ec4963a77ac21984746 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.160 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:18.419 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:18.419 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a1e2a5c9-59e8-413a-ba5c-265b835864c9 -a 10.0.0.3 -s 4420 -i 4 00:15:18.419 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:18.419 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:18.419 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.419 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:18.419 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:18.419 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:20.950 [ 0]:0x1 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c619918d15c54c71bbd571639d1efe0d 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c619918d15c54c71bbd571639d1efe0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:20.950 [ 1]:0x2 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdf79e4686924ec4963a77ac21984746 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdf79e4686924ec4963a77ac21984746 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:20.950 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
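NOT, wrapping the negative checks here, is autotest_common.sh's assert-that-this-fails helper: it validates and runs its argument list, captures the exit status, and succeeds only when that status is non-zero. Condensed from the traced logic (the real helper also inspects statuses above 128, i.e. signal exits, before deciding):

    NOT() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))    # success iff the wrapped command returned non-zero
    }

So NOT ns_is_visible 0x1 passes exactly when the host can no longer see NSID 1 after nvmf_ns_remove_host, and NOT rpc.py nvmf_ns_remove_host ... 2 ... asserts that the RPC itself is rejected for namespace 2, which was attached without --no-auto-visible, as the Invalid parameters error below confirms.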
00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:21.209 [ 0]:0x2 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdf79e4686924ec4963a77ac21984746 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdf79e4686924ec4963a77ac21984746 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:21.209 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:21.467 [2024-12-06 13:09:09.988104] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:21.467 2024/12/06 13:09:09 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:15:21.467 request: 00:15:21.467 { 00:15:21.467 "method": "nvmf_ns_remove_host", 00:15:21.467 "params": { 00:15:21.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:21.467 "nsid": 2, 00:15:21.467 "host": "nqn.2016-06.io.spdk:host1" 00:15:21.467 } 00:15:21.467 } 00:15:21.467 Got JSON-RPC error response 00:15:21.467 GoRPCClient: error on JSON-RPC call 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:15:21.467 [ 0]:0x2 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:21.467 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fdf79e4686924ec4963a77ac21984746 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fdf79e4686924ec4963a77ac21984746 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:21.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76285 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76285 /var/tmp/host.sock 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 76285 ']' 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:21.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.725 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:21.725 [2024-12-06 13:09:10.238977] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
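From this point the test drives two SPDK processes: the nvmf target on the default /var/tmp/spdk.sock and a host-side spdk_tgt (core mask 2) listening on /var/tmp/host.sock, so the host's bdev_nvme RPCs stay separate from the target's. The hostrpc calls that appear below reduce to pointing rpc.py at that second socket, as in this sketch built from the ns_masking.sh@48 trace:

# The pattern behind 'hostrpc': same rpc.py, second RPC socket.
hostrpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}
hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0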
00:15:21.725 [2024-12-06 13:09:10.239091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76285 ] 00:15:21.982 [2024-12-06 13:09:10.390953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.982 [2024-12-06 13:09:10.462486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.913 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.913 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:22.913 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.913 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:23.171 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid dfdad150-c3a4-4742-a9e1-7b270553dcd3 00:15:23.171 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:23.171 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DFDAD150C3A44742A9E17B270553DCD3 -i 00:15:23.469 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2b12baa8-65b5-4bac-ad87-b17bafaa9b9d 00:15:23.469 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:23.469 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2B12BAA865B54BACAD87B17BAFAA9B9D -i 00:15:23.759 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:24.325 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:24.325 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:24.325 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:24.891 nvme0n1 00:15:24.892 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:24.892 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:25.149 nvme1n2 00:15:25.149 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:25.149 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:25.150 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:25.150 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:25.150 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:25.715 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:25.715 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:25.715 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:25.715 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:25.973 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ dfdad150-c3a4-4742-a9e1-7b270553dcd3 == \d\f\d\a\d\1\5\0\-\c\3\a\4\-\4\7\4\2\-\a\9\e\1\-\7\b\2\7\0\5\5\3\d\c\d\3 ]] 00:15:25.973 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:25.973 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:25.973 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:26.230 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2b12baa8-65b5-4bac-ad87-b17bafaa9b9d == \2\b\1\2\b\a\a\8\-\6\5\b\5\-\4\b\a\c\-\a\d\8\7\-\b\1\7\b\a\f\a\a\9\b\9\d ]] 00:15:26.230 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.488 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid dfdad150-c3a4-4742-a9e1-7b270553dcd3 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g DFDAD150C3A44742A9E17B270553DCD3 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g DFDAD150C3A44742A9E17B270553DCD3 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:26.745 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g DFDAD150C3A44742A9E17B270553DCD3 00:15:27.002 [2024-12-06 13:09:15.622380] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:27.002 [2024-12-06 13:09:15.622435] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:27.002 [2024-12-06 13:09:15.622449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.002 2024/12/06 13:09:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:DFDAD150C3A44742A9E17B270553DCD3 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:27.002 request: 00:15:27.002 { 00:15:27.002 "method": "nvmf_subsystem_add_ns", 00:15:27.002 "params": { 00:15:27.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.002 "namespace": { 00:15:27.002 "bdev_name": "invalid", 00:15:27.002 "nsid": 1, 00:15:27.002 "nguid": "DFDAD150C3A44742A9E17B270553DCD3", 00:15:27.002 "no_auto_visible": false, 00:15:27.002 "hide_metadata": false 00:15:27.002 } 00:15:27.002 } 00:15:27.002 } 00:15:27.002 Got JSON-RPC error response 00:15:27.002 GoRPCClient: error on JSON-RPC call 00:15:27.260 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:27.260 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.260 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.260 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.260 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid dfdad150-c3a4-4742-a9e1-7b270553dcd3 00:15:27.260 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:27.260 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DFDAD150C3A44742A9E17B270553DCD3 -i 
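The uuid2nguid step traced here is a simple text transform: the UUID with its dashes stripped becomes the 32-hex-digit NGUID that `nvmf_subsystem_add_ns -g` expects. The trace only shows the `tr -d -` stage (nvmf/common.sh@787); the upper-casing is assumed from the NGUID that results:

# Hypothetical reconstruction of uuid2nguid; only the tr step is traced,
# the upper-casing is inferred from the resulting NGUID.
uuid2nguid() {
    local uuid=${1^^}     # dfdad150-c3a4-... -> DFDAD150-C3A4-...
    tr -d - <<< "$uuid"   # -> DFDAD150C3A44742A9E17B270553DCD3
}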
00:15:27.518 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:29.416 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:29.416 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:29.416 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 76285 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 76285 ']' 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 76285 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76285 00:15:29.675 killing process with pid 76285 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76285' 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 76285 00:15:29.675 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 76285 00:15:30.242 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.501 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:30.501 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:30.501 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:30.501 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:30.501 rmmod nvme_tcp 00:15:30.501 rmmod nvme_fabrics 00:15:30.501 rmmod nvme_keyring 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@517 -- # '[' -n 75897 ']' 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 75897 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75897 ']' 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75897 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75897 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75897' 00:15:30.501 killing process with pid 75897 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75897 00:15:30.501 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75897 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:30.759 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 
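The firewall teardown in this epilogue works because every rule SPDK installs is tagged: the install helper appends an `-m comment` whose text begins with `SPDK_NVMF:` (visible in the init trace of the next test further down), so cleanup can strip exactly those rules by filtering the saved ruleset. Condensed from the two traces in this log:

# Install side (nvmf/common.sh@790): tag the rule so cleanup can find it.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# Cleanup side (nvmf/common.sh@791): restore everything except tagged rules.
iptables-save | grep -v SPDK_NVMF | iptables-restore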
00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.018 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:15:31.018 00:15:31.018 real 0m23.017s 00:15:31.019 user 0m39.378s 00:15:31.019 sys 0m3.494s 00:15:31.019 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.019 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:31.019 ************************************ 00:15:31.019 END TEST nvmf_ns_masking 00:15:31.019 ************************************ 00:15:31.019 13:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:15:31.019 13:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:15:31.019 13:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:31.019 13:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:31.019 13:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.019 13:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.019 ************************************ 00:15:31.019 START TEST nvmf_auth_target 00:15:31.019 ************************************ 00:15:31.019 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:31.279 * Looking for test storage... 
00:15:31.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:31.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.279 --rc genhtml_branch_coverage=1 00:15:31.279 --rc genhtml_function_coverage=1 00:15:31.279 --rc genhtml_legend=1 00:15:31.279 --rc geninfo_all_blocks=1 00:15:31.279 --rc geninfo_unexecuted_blocks=1 00:15:31.279 00:15:31.279 ' 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:31.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.279 --rc genhtml_branch_coverage=1 00:15:31.279 --rc genhtml_function_coverage=1 00:15:31.279 --rc genhtml_legend=1 00:15:31.279 --rc geninfo_all_blocks=1 00:15:31.279 --rc geninfo_unexecuted_blocks=1 00:15:31.279 00:15:31.279 ' 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:31.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.279 --rc genhtml_branch_coverage=1 00:15:31.279 --rc genhtml_function_coverage=1 00:15:31.279 --rc genhtml_legend=1 00:15:31.279 --rc geninfo_all_blocks=1 00:15:31.279 --rc geninfo_unexecuted_blocks=1 00:15:31.279 00:15:31.279 ' 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:31.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.279 --rc genhtml_branch_coverage=1 00:15:31.279 --rc genhtml_function_coverage=1 00:15:31.279 --rc genhtml_legend=1 00:15:31.279 --rc geninfo_all_blocks=1 00:15:31.279 --rc geninfo_unexecuted_blocks=1 00:15:31.279 00:15:31.279 ' 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.279 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.280 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.280 
13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:31.280 Cannot find device "nvmf_init_br" 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:31.280 Cannot find device "nvmf_init_br2" 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:31.280 Cannot find device "nvmf_tgt_br" 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.280 Cannot find device "nvmf_tgt_br2" 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:31.280 Cannot find device "nvmf_init_br" 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:31.280 Cannot find device "nvmf_init_br2" 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:31.280 Cannot find device "nvmf_tgt_br" 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:15:31.280 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:31.539 Cannot find device "nvmf_tgt_br2" 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:31.539 Cannot find device "nvmf_br" 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:31.539 Cannot find device "nvmf_init_if" 00:15:31.539 13:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:31.539 Cannot find device "nvmf_init_if2" 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.539 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.539 13:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.539 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:31.798 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.798 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:15:31.798 00:15:31.798 --- 10.0.0.3 ping statistics --- 00:15:31.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.798 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:31.798 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:31.798 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:15:31.798 00:15:31.798 --- 10.0.0.4 ping statistics --- 00:15:31.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.798 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:31.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:15:31.798 00:15:31.798 --- 10.0.0.1 ping statistics --- 00:15:31.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.798 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:31.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:31.798 00:15:31.798 --- 10.0.0.2 ping statistics --- 00:15:31.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.798 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.798 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=76786 00:15:31.799 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:31.799 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 76786 00:15:31.799 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76786 ']' 00:15:31.799 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.799 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.799 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
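The waitforlisten step that follows blocks until the freshly started nvmf_tgt answers on its RPC socket. The trace only shows the locals being set (rpc_addr=/var/tmp/spdk.sock, max_retries=100); the polling body below is an assumed approximation, not the harness's exact code:

# Hypothetical sketch of waitforlisten: poll the RPC socket until the app
# responds or retries run out. rpc_get_methods is a standard SPDK RPC.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # app died while starting
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
            rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}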
00:15:31.799 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.799 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76830 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fbcc79185694501f7f1c36c1aa6f64c3c1f6e0696b827427 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rvh 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fbcc79185694501f7f1c36c1aa6f64c3c1f6e0696b827427 0 00:15:33.174 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fbcc79185694501f7f1c36c1aa6f64c3c1f6e0696b827427 0 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fbcc79185694501f7f1c36c1aa6f64c3c1f6e0696b827427 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.175 13:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rvh 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rvh 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.rvh 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ff6f9b4a413ef248678458362f2cee671fd17afc5b98dbf07a5d5a038bb799cc 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mLf 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ff6f9b4a413ef248678458362f2cee671fd17afc5b98dbf07a5d5a038bb799cc 3 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ff6f9b4a413ef248678458362f2cee671fd17afc5b98dbf07a5d5a038bb799cc 3 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ff6f9b4a413ef248678458362f2cee671fd17afc5b98dbf07a5d5a038bb799cc 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mLf 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mLf 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.mLf 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:33.175 13:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3586997f6ccdf02356e115b03c530da8 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5Wa 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3586997f6ccdf02356e115b03c530da8 1 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3586997f6ccdf02356e115b03c530da8 1 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3586997f6ccdf02356e115b03c530da8 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5Wa 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5Wa 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.5Wa 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=29cf9e4c00be947fc0229677aa156365863c4816a29dd955 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.v8x 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 29cf9e4c00be947fc0229677aa156365863c4816a29dd955 2 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 29cf9e4c00be947fc0229677aa156365863c4816a29dd955 2 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=29cf9e4c00be947fc0229677aa156365863c4816a29dd955 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.v8x 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.v8x 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.v8x 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2d4adeb0f289ca889720ce659fec6703fe62583fb638dddd 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.i4P 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2d4adeb0f289ca889720ce659fec6703fe62583fb638dddd 2 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2d4adeb0f289ca889720ce659fec6703fe62583fb638dddd 2 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2d4adeb0f289ca889720ce659fec6703fe62583fb638dddd 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.i4P 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.i4P 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.i4P 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.175 13:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b2ae6ea16dd2d0fd34bb410ec185ff7f 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:33.175 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4NV 00:15:33.176 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b2ae6ea16dd2d0fd34bb410ec185ff7f 1 00:15:33.176 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b2ae6ea16dd2d0fd34bb410ec185ff7f 1 00:15:33.176 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.176 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.176 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b2ae6ea16dd2d0fd34bb410ec185ff7f 00:15:33.176 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:33.176 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4NV 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4NV 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.4NV 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=64f3265b0319840f233f15cc2477b043e2e4a07e4347f5ccbd05bc7aa695771c 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.k7a 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
64f3265b0319840f233f15cc2477b043e2e4a07e4347f5ccbd05bc7aa695771c 3 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 64f3265b0319840f233f15cc2477b043e2e4a07e4347f5ccbd05bc7aa695771c 3 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.435 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=64f3265b0319840f233f15cc2477b043e2e4a07e4347f5ccbd05bc7aa695771c 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.k7a 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.k7a 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.k7a 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76786 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76786 ']' 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.436 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.694 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.694 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:33.694 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76830 /var/tmp/host.sock 00:15:33.694 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76830 ']' 00:15:33.695 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:33.695 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:33.695 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
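Every secret generated above follows the same recipe: gen_dhchap_key pulls len/2 random bytes from /dev/urandom as a len-character hex string, and format_dhchap_key wraps it into the textual DHHC-1 form through the traced `python -` snippet, using the digest ids from the digests map (null=0, sha256=1, sha384=2, sha512=3). A self-contained sketch, assuming the encoding base64-encodes the secret plus a little-endian CRC-32 suffix, which is consistent with the DHHC-1:NN:...: strings handed to nvme connect later in this log:

# Sketch of gen_dhchap_key/format_dhchap_key; digest id is passed directly here
# instead of being looked up in the digests map as the traced helper does.
gen_dhchap_key() {
  local digest=$1 len=$2 key
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # $len hex characters of randomness
  python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")        # assumed 4-byte integrity suffix
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key 0 48    # null digest (id 0), 48-char secret -> DHHC-1:00:...:

The result is written to a chmod-0600 temp file, and the auth.sh@108-113 loop below registers those files as key0..key3 and ckey0..ckey2 on both RPC sockets with keyring_file_add_key.
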
00:15:33.695 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.695 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.962 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.962 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:33.962 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:33.962 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.962 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.962 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.962 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:33.962 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rvh 00:15:33.962 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.963 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.963 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.963 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rvh 00:15:33.963 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rvh 00:15:34.539 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.mLf ]] 00:15:34.539 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mLf 00:15:34.539 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.539 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.539 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.539 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mLf 00:15:34.539 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mLf 00:15:34.539 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:34.539 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5Wa 00:15:34.539 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.539 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.796 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.796 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5Wa 00:15:34.796 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5Wa 00:15:35.055 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.v8x ]] 00:15:35.055 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.v8x 00:15:35.055 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.055 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.055 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.055 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.v8x 00:15:35.055 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.v8x 00:15:35.312 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:35.312 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.i4P 00:15:35.312 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.312 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.312 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.312 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.i4P 00:15:35.312 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.i4P 00:15:35.569 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.4NV ]] 00:15:35.569 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4NV 00:15:35.569 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.569 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.569 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.569 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4NV 00:15:35.569 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4NV 00:15:35.826 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:35.826 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.k7a 00:15:35.826 13:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.826 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.826 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.826 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.k7a 00:15:35.826 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.k7a 00:15:36.084 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:36.084 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:36.084 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.084 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.084 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:36.084 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.341 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.915 00:15:36.915 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.916 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.916 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.916 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.172 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.172 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.172 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.172 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.172 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.172 { 00:15:37.172 "auth": { 00:15:37.172 "dhgroup": "null", 00:15:37.172 "digest": "sha256", 00:15:37.172 "state": "completed" 00:15:37.172 }, 00:15:37.172 "cntlid": 1, 00:15:37.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:15:37.173 "listen_address": { 00:15:37.173 "adrfam": "IPv4", 00:15:37.173 "traddr": "10.0.0.3", 00:15:37.173 "trsvcid": "4420", 00:15:37.173 "trtype": "TCP" 00:15:37.173 }, 00:15:37.173 "peer_address": { 00:15:37.173 "adrfam": "IPv4", 00:15:37.173 "traddr": "10.0.0.1", 00:15:37.173 "trsvcid": "42868", 00:15:37.173 "trtype": "TCP" 00:15:37.173 }, 00:15:37.173 "qid": 0, 00:15:37.173 "state": "enabled", 00:15:37.173 "thread": "nvmf_tgt_poll_group_000" 00:15:37.173 } 00:15:37.173 ]' 00:15:37.173 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.173 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.173 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.173 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:37.173 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.173 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.173 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.173 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.429 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:15:37.429 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.789 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.789 13:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.789 00:15:42.789 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.789 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.789 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.789 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.081 { 00:15:43.081 "auth": { 00:15:43.081 "dhgroup": "null", 00:15:43.081 "digest": "sha256", 00:15:43.081 "state": "completed" 00:15:43.081 }, 00:15:43.081 "cntlid": 3, 00:15:43.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:15:43.081 "listen_address": { 00:15:43.081 "adrfam": "IPv4", 00:15:43.081 "traddr": "10.0.0.3", 00:15:43.081 "trsvcid": "4420", 00:15:43.081 "trtype": "TCP" 00:15:43.081 }, 00:15:43.081 "peer_address": { 00:15:43.081 "adrfam": "IPv4", 00:15:43.081 "traddr": "10.0.0.1", 00:15:43.081 "trsvcid": "53442", 00:15:43.081 "trtype": "TCP" 00:15:43.081 }, 00:15:43.081 "qid": 0, 00:15:43.081 "state": "enabled", 00:15:43.081 "thread": "nvmf_tgt_poll_group_000" 00:15:43.081 } 00:15:43.081 ]' 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.081 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.340 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret 
DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:15:43.340 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:15:44.275 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.275 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:44.275 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.275 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.275 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.275 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.275 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:44.275 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.533 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.791 00:15:44.791 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.791 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.791 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.378 { 00:15:45.378 "auth": { 00:15:45.378 "dhgroup": "null", 00:15:45.378 "digest": "sha256", 00:15:45.378 "state": "completed" 00:15:45.378 }, 00:15:45.378 "cntlid": 5, 00:15:45.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:15:45.378 "listen_address": { 00:15:45.378 "adrfam": "IPv4", 00:15:45.378 "traddr": "10.0.0.3", 00:15:45.378 "trsvcid": "4420", 00:15:45.378 "trtype": "TCP" 00:15:45.378 }, 00:15:45.378 "peer_address": { 00:15:45.378 "adrfam": "IPv4", 00:15:45.378 "traddr": "10.0.0.1", 00:15:45.378 "trsvcid": "53470", 00:15:45.378 "trtype": "TCP" 00:15:45.378 }, 00:15:45.378 "qid": 0, 00:15:45.378 "state": "enabled", 00:15:45.378 "thread": "nvmf_tgt_poll_group_000" 00:15:45.378 } 00:15:45.378 ]' 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.378 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.726 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:15:45.726 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:15:46.319 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.319 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:46.319 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.319 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.319 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.319 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.319 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:46.319 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.883 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.140 00:15:47.140 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.140 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.140 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.397 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.397 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.397 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.397 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.397 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.397 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.397 { 00:15:47.397 "auth": { 00:15:47.397 "dhgroup": "null", 00:15:47.397 "digest": "sha256", 00:15:47.397 "state": "completed" 00:15:47.397 }, 00:15:47.397 "cntlid": 7, 00:15:47.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:15:47.397 "listen_address": { 00:15:47.397 "adrfam": "IPv4", 00:15:47.397 "traddr": "10.0.0.3", 00:15:47.397 "trsvcid": "4420", 00:15:47.397 "trtype": "TCP" 00:15:47.397 }, 00:15:47.397 "peer_address": { 00:15:47.397 "adrfam": "IPv4", 00:15:47.397 "traddr": "10.0.0.1", 00:15:47.397 "trsvcid": "53498", 00:15:47.397 "trtype": "TCP" 00:15:47.397 }, 00:15:47.397 "qid": 0, 00:15:47.397 "state": "enabled", 00:15:47.397 "thread": "nvmf_tgt_poll_group_000" 00:15:47.397 } 00:15:47.397 ]' 00:15:47.397 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.397 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.397 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.397 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.397 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.655 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.655 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.655 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.913 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:15:47.913 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:15:48.844 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.844 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:48.844 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.844 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.844 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.844 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.844 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.844 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:48.844 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:48.844 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.845 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.410 00:15:49.410 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.410 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.410 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.667 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.667 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.667 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.667 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.668 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.668 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.668 { 00:15:49.668 "auth": { 00:15:49.668 "dhgroup": "ffdhe2048", 00:15:49.668 "digest": "sha256", 00:15:49.668 "state": "completed" 00:15:49.668 }, 00:15:49.668 "cntlid": 9, 00:15:49.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:15:49.668 "listen_address": { 00:15:49.668 "adrfam": "IPv4", 00:15:49.668 "traddr": "10.0.0.3", 00:15:49.668 "trsvcid": "4420", 00:15:49.668 "trtype": "TCP" 00:15:49.668 }, 00:15:49.668 "peer_address": { 00:15:49.668 "adrfam": "IPv4", 00:15:49.668 "traddr": "10.0.0.1", 00:15:49.668 "trsvcid": "53526", 00:15:49.668 "trtype": "TCP" 00:15:49.668 }, 00:15:49.668 "qid": 0, 00:15:49.668 "state": "enabled", 00:15:49.668 "thread": "nvmf_tgt_poll_group_000" 00:15:49.668 } 00:15:49.668 ]' 00:15:49.668 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.668 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.668 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.668 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.668 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.924 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.924 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.924 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.181 
13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:15:50.181 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:15:50.746 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.746 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:50.746 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.746 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.746 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.746 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.746 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.746 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.310 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.567 00:15:51.567 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.567 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.567 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.824 { 00:15:51.824 "auth": { 00:15:51.824 "dhgroup": "ffdhe2048", 00:15:51.824 "digest": "sha256", 00:15:51.824 "state": "completed" 00:15:51.824 }, 00:15:51.824 "cntlid": 11, 00:15:51.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:15:51.824 "listen_address": { 00:15:51.824 "adrfam": "IPv4", 00:15:51.824 "traddr": "10.0.0.3", 00:15:51.824 "trsvcid": "4420", 00:15:51.824 "trtype": "TCP" 00:15:51.824 }, 00:15:51.824 "peer_address": { 00:15:51.824 "adrfam": "IPv4", 00:15:51.824 "traddr": "10.0.0.1", 00:15:51.824 "trsvcid": "53534", 00:15:51.824 "trtype": "TCP" 00:15:51.824 }, 00:15:51.824 "qid": 0, 00:15:51.824 "state": "enabled", 00:15:51.824 "thread": "nvmf_tgt_poll_group_000" 00:15:51.824 } 00:15:51.824 ]' 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:51.824 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.081 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.081 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.081 
13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.338 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:15:52.338 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.352 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.937 00:15:53.937 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.937 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.937 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.195 { 00:15:54.195 "auth": { 00:15:54.195 "dhgroup": "ffdhe2048", 00:15:54.195 "digest": "sha256", 00:15:54.195 "state": "completed" 00:15:54.195 }, 00:15:54.195 "cntlid": 13, 00:15:54.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:15:54.195 "listen_address": { 00:15:54.195 "adrfam": "IPv4", 00:15:54.195 "traddr": "10.0.0.3", 00:15:54.195 "trsvcid": "4420", 00:15:54.195 "trtype": "TCP" 00:15:54.195 }, 00:15:54.195 "peer_address": { 00:15:54.195 "adrfam": "IPv4", 00:15:54.195 "traddr": "10.0.0.1", 00:15:54.195 "trsvcid": "36172", 00:15:54.195 "trtype": "TCP" 00:15:54.195 }, 00:15:54.195 "qid": 0, 00:15:54.195 "state": "enabled", 00:15:54.195 "thread": "nvmf_tgt_poll_group_000" 00:15:54.195 } 00:15:54.195 ]' 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.195 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.453 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.453 13:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.453 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.711 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:15:54.711 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:15:55.277 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.277 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:55.277 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.277 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.277 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.277 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.277 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.277 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
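Every iteration of the loop traced above follows the same fixed cycle for one (digest, dhgroup, keyid) combination: pin the host bdev layer to a single DH-HMAC-CHAP digest and DH group, register the host NQN on the target subsystem with the key pair under test, attach a controller (which forces in-band authentication during connect), confirm via jq that the qpair reports auth.state "completed" with the expected digest and dhgroup, detach, exercise the same authentication once more through the kernel initiator with nvme-cli, and finally remove the host again. A condensed sketch of one such cycle, assembled from the RPCs visible in this trace (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, rpc_cmd is the suite's target-side RPC wrapper from autotest_common.sh, and $HOSTNQN/$HOSTID/$KEY0/$CKEY0 are shorthand introduced here for the UUID-based host NQN and the DHHC-1 secrets shown above):

  # host side: allow exactly one digest and one DH group for this pass
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # target side: require this key (and controller key) from the host NQN
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach through the SPDK host stack; DH-HMAC-CHAP runs as part of connect
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'               # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'  # expect "completed"
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # same check via the kernel initiator, passing the secrets directly
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

Note the keyid 3 iterations omit --dhchap-ctrlr-key entirely (ckeys[3] is empty in target/auth.sh), so those passes verify one-way host authentication only; the trace below continues with the key3/ffdhe2048 cycle following exactly this shape.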
00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.844 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.102 00:15:56.102 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.102 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.102 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.360 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.360 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.360 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.360 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.360 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.360 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.360 { 00:15:56.360 "auth": { 00:15:56.360 "dhgroup": "ffdhe2048", 00:15:56.360 "digest": "sha256", 00:15:56.360 "state": "completed" 00:15:56.360 }, 00:15:56.360 "cntlid": 15, 00:15:56.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:15:56.360 "listen_address": { 00:15:56.360 "adrfam": "IPv4", 00:15:56.360 "traddr": "10.0.0.3", 00:15:56.360 "trsvcid": "4420", 00:15:56.360 "trtype": "TCP" 00:15:56.360 }, 00:15:56.360 "peer_address": { 00:15:56.360 "adrfam": "IPv4", 00:15:56.360 "traddr": "10.0.0.1", 00:15:56.360 "trsvcid": "36198", 00:15:56.360 "trtype": "TCP" 00:15:56.360 }, 00:15:56.360 "qid": 0, 00:15:56.360 "state": "enabled", 00:15:56.360 "thread": "nvmf_tgt_poll_group_000" 00:15:56.360 } 00:15:56.360 ]' 00:15:56.360 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.625 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.625 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.625 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.626 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.626 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.626 
13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.626 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.887 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:15:56.887 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:15:57.820 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.820 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:15:57.820 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.820 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.820 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.820 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.820 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.820 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:57.820 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.078 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.336 00:15:58.336 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.336 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.336 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.594 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.594 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.594 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.594 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.594 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.594 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.594 { 00:15:58.594 "auth": { 00:15:58.594 "dhgroup": "ffdhe3072", 00:15:58.594 "digest": "sha256", 00:15:58.594 "state": "completed" 00:15:58.594 }, 00:15:58.594 "cntlid": 17, 00:15:58.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:15:58.594 "listen_address": { 00:15:58.594 "adrfam": "IPv4", 00:15:58.594 "traddr": "10.0.0.3", 00:15:58.594 "trsvcid": "4420", 00:15:58.594 "trtype": "TCP" 00:15:58.594 }, 00:15:58.594 "peer_address": { 00:15:58.594 "adrfam": "IPv4", 00:15:58.594 "traddr": "10.0.0.1", 00:15:58.594 "trsvcid": "36230", 00:15:58.594 "trtype": "TCP" 00:15:58.594 }, 00:15:58.594 "qid": 0, 00:15:58.594 "state": "enabled", 00:15:58.594 "thread": "nvmf_tgt_poll_group_000" 00:15:58.594 } 00:15:58.594 ]' 00:15:58.594 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.853 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.853 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.853 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:58.853 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.853 13:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.853 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.853 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.112 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:15:59.112 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:00.047 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.047 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:00.047 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.047 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.047 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.047 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.047 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.047 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.307 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.565 00:16:00.565 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.565 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.565 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.200 { 00:16:01.200 "auth": { 00:16:01.200 "dhgroup": "ffdhe3072", 00:16:01.200 "digest": "sha256", 00:16:01.200 "state": "completed" 00:16:01.200 }, 00:16:01.200 "cntlid": 19, 00:16:01.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:01.200 "listen_address": { 00:16:01.200 "adrfam": "IPv4", 00:16:01.200 "traddr": "10.0.0.3", 00:16:01.200 "trsvcid": "4420", 00:16:01.200 "trtype": "TCP" 00:16:01.200 }, 00:16:01.200 "peer_address": { 00:16:01.200 "adrfam": "IPv4", 00:16:01.200 "traddr": "10.0.0.1", 00:16:01.200 "trsvcid": "36260", 00:16:01.200 "trtype": "TCP" 00:16:01.200 }, 00:16:01.200 "qid": 0, 00:16:01.200 "state": "enabled", 00:16:01.200 "thread": "nvmf_tgt_poll_group_000" 00:16:01.200 } 00:16:01.200 ]' 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.200 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.497 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:01.497 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:02.063 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.063 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:02.063 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.063 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.063 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.063 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.063 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.063 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.629 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:02.629 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.629 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.629 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:02.629 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.629 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.630 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.630 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.630 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.630 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.630 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.630 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.630 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.886 00:16:02.886 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.886 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.886 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.143 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.144 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.144 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.144 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.144 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.144 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.144 { 00:16:03.144 "auth": { 00:16:03.144 "dhgroup": "ffdhe3072", 00:16:03.144 "digest": "sha256", 00:16:03.144 "state": "completed" 00:16:03.144 }, 00:16:03.144 "cntlid": 21, 00:16:03.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:03.144 "listen_address": { 00:16:03.144 "adrfam": "IPv4", 00:16:03.144 "traddr": "10.0.0.3", 00:16:03.144 "trsvcid": "4420", 00:16:03.144 "trtype": "TCP" 00:16:03.144 }, 00:16:03.144 "peer_address": { 00:16:03.144 "adrfam": "IPv4", 00:16:03.144 "traddr": "10.0.0.1", 00:16:03.144 "trsvcid": "56894", 00:16:03.144 "trtype": "TCP" 00:16:03.144 }, 00:16:03.144 "qid": 0, 00:16:03.144 "state": "enabled", 00:16:03.144 "thread": "nvmf_tgt_poll_group_000" 00:16:03.144 } 00:16:03.144 ]' 00:16:03.144 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.144 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.144 13:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.402 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.402 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.402 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.402 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.402 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.661 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:03.661 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:04.225 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.225 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:04.225 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.225 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.225 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.225 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.225 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.225 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.482 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.483 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.049 00:16:05.049 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.049 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.049 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.307 { 00:16:05.307 "auth": { 00:16:05.307 "dhgroup": "ffdhe3072", 00:16:05.307 "digest": "sha256", 00:16:05.307 "state": "completed" 00:16:05.307 }, 00:16:05.307 "cntlid": 23, 00:16:05.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:05.307 "listen_address": { 00:16:05.307 "adrfam": "IPv4", 00:16:05.307 "traddr": "10.0.0.3", 00:16:05.307 "trsvcid": "4420", 00:16:05.307 "trtype": "TCP" 00:16:05.307 }, 00:16:05.307 "peer_address": { 00:16:05.307 "adrfam": "IPv4", 00:16:05.307 "traddr": "10.0.0.1", 00:16:05.307 "trsvcid": "56922", 00:16:05.307 "trtype": "TCP" 00:16:05.307 }, 00:16:05.307 "qid": 0, 00:16:05.307 "state": "enabled", 00:16:05.307 "thread": "nvmf_tgt_poll_group_000" 00:16:05.307 } 00:16:05.307 ]' 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.307 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.568 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.568 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.568 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.835 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:05.835 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:06.400 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.400 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:06.400 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.400 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.400 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.400 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.400 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.400 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.400 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.965 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:06.965 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.966 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.224 00:16:07.224 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.224 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.224 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.482 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.482 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.482 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.482 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.482 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.482 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.482 { 00:16:07.482 "auth": { 00:16:07.482 "dhgroup": "ffdhe4096", 00:16:07.482 "digest": "sha256", 00:16:07.482 "state": "completed" 00:16:07.482 }, 00:16:07.482 "cntlid": 25, 00:16:07.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:07.482 "listen_address": { 00:16:07.482 "adrfam": "IPv4", 00:16:07.482 "traddr": "10.0.0.3", 00:16:07.482 "trsvcid": "4420", 00:16:07.482 "trtype": "TCP" 00:16:07.482 }, 00:16:07.482 "peer_address": { 00:16:07.482 "adrfam": "IPv4", 00:16:07.482 "traddr": "10.0.0.1", 00:16:07.482 "trsvcid": "56948", 00:16:07.482 "trtype": "TCP" 00:16:07.482 }, 00:16:07.482 "qid": 0, 00:16:07.482 "state": "enabled", 00:16:07.482 "thread": "nvmf_tgt_poll_group_000" 00:16:07.482 } 00:16:07.482 ]' 00:16:07.482 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:07.740 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.740 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.740 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.740 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.740 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.740 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.740 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.997 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:07.997 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:08.563 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.563 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:08.563 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.563 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.821 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.821 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.821 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.821 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.115 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.374 00:16:09.374 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.374 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.374 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.942 { 00:16:09.942 "auth": { 00:16:09.942 "dhgroup": "ffdhe4096", 00:16:09.942 "digest": "sha256", 00:16:09.942 "state": "completed" 00:16:09.942 }, 00:16:09.942 "cntlid": 27, 00:16:09.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:09.942 "listen_address": { 00:16:09.942 "adrfam": "IPv4", 00:16:09.942 "traddr": "10.0.0.3", 00:16:09.942 "trsvcid": "4420", 00:16:09.942 "trtype": "TCP" 00:16:09.942 }, 00:16:09.942 "peer_address": { 00:16:09.942 "adrfam": "IPv4", 00:16:09.942 "traddr": "10.0.0.1", 00:16:09.942 "trsvcid": "56972", 00:16:09.942 "trtype": "TCP" 00:16:09.942 }, 00:16:09.942 "qid": 0, 
00:16:09.942 "state": "enabled", 00:16:09.942 "thread": "nvmf_tgt_poll_group_000" 00:16:09.942 } 00:16:09.942 ]' 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.942 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.201 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:10.202 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:10.775 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.775 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:10.775 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.775 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.775 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.775 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.775 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.775 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.034 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.600 00:16:11.600 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.600 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.600 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.859 { 00:16:11.859 "auth": { 00:16:11.859 "dhgroup": "ffdhe4096", 00:16:11.859 "digest": "sha256", 00:16:11.859 "state": "completed" 00:16:11.859 }, 00:16:11.859 "cntlid": 29, 00:16:11.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:11.859 "listen_address": { 00:16:11.859 "adrfam": "IPv4", 00:16:11.859 "traddr": "10.0.0.3", 00:16:11.859 "trsvcid": "4420", 00:16:11.859 "trtype": "TCP" 00:16:11.859 }, 00:16:11.859 "peer_address": { 00:16:11.859 "adrfam": "IPv4", 00:16:11.859 "traddr": "10.0.0.1", 
00:16:11.859 "trsvcid": "57000", 00:16:11.859 "trtype": "TCP" 00:16:11.859 }, 00:16:11.859 "qid": 0, 00:16:11.859 "state": "enabled", 00:16:11.859 "thread": "nvmf_tgt_poll_group_000" 00:16:11.859 } 00:16:11.859 ]' 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.859 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.426 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:12.427 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:12.992 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.992 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:12.992 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.992 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.992 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.992 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.992 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.993 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.250 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.818 00:16:13.818 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.818 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.818 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.076 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.076 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.076 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.076 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.076 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.076 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.076 { 00:16:14.076 "auth": { 00:16:14.076 "dhgroup": "ffdhe4096", 00:16:14.076 "digest": "sha256", 00:16:14.076 "state": "completed" 00:16:14.076 }, 00:16:14.076 "cntlid": 31, 00:16:14.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:14.076 "listen_address": { 00:16:14.076 "adrfam": "IPv4", 00:16:14.076 "traddr": "10.0.0.3", 00:16:14.076 "trsvcid": "4420", 00:16:14.076 "trtype": "TCP" 00:16:14.076 }, 00:16:14.076 "peer_address": { 00:16:14.076 "adrfam": "IPv4", 00:16:14.076 "traddr": 
"10.0.0.1", 00:16:14.076 "trsvcid": "48878", 00:16:14.076 "trtype": "TCP" 00:16:14.076 }, 00:16:14.076 "qid": 0, 00:16:14.076 "state": "enabled", 00:16:14.076 "thread": "nvmf_tgt_poll_group_000" 00:16:14.076 } 00:16:14.076 ]' 00:16:14.076 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.076 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.076 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.334 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.334 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.334 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.334 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.334 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.593 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:14.593 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:15.558 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.558 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:15.558 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.558 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.558 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.558 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.558 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.558 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.558 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.558 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.124 00:16:16.124 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.124 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.124 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.688 { 00:16:16.688 "auth": { 00:16:16.688 "dhgroup": "ffdhe6144", 00:16:16.688 "digest": "sha256", 00:16:16.688 "state": "completed" 00:16:16.688 }, 00:16:16.688 "cntlid": 33, 00:16:16.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:16.688 "listen_address": { 00:16:16.688 "adrfam": "IPv4", 00:16:16.688 "traddr": "10.0.0.3", 00:16:16.688 "trsvcid": "4420", 00:16:16.688 
"trtype": "TCP" 00:16:16.688 }, 00:16:16.688 "peer_address": { 00:16:16.688 "adrfam": "IPv4", 00:16:16.688 "traddr": "10.0.0.1", 00:16:16.688 "trsvcid": "48906", 00:16:16.688 "trtype": "TCP" 00:16:16.688 }, 00:16:16.688 "qid": 0, 00:16:16.688 "state": "enabled", 00:16:16.688 "thread": "nvmf_tgt_poll_group_000" 00:16:16.688 } 00:16:16.688 ]' 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.688 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.254 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:17.254 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:17.820 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.820 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:17.820 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.820 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.820 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.820 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.820 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:17.820 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.385 13:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.946 00:16:18.946 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.946 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.946 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.203 { 00:16:19.203 "auth": { 00:16:19.203 "dhgroup": "ffdhe6144", 00:16:19.203 "digest": "sha256", 00:16:19.203 "state": "completed" 00:16:19.203 }, 00:16:19.203 "cntlid": 35, 00:16:19.203 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:19.203 "listen_address": { 00:16:19.203 "adrfam": "IPv4", 00:16:19.203 "traddr": "10.0.0.3", 00:16:19.203 "trsvcid": "4420", 00:16:19.203 "trtype": "TCP" 00:16:19.203 }, 00:16:19.203 "peer_address": { 00:16:19.203 "adrfam": "IPv4", 00:16:19.203 "traddr": "10.0.0.1", 00:16:19.203 "trsvcid": "48916", 00:16:19.203 "trtype": "TCP" 00:16:19.203 }, 00:16:19.203 "qid": 0, 00:16:19.203 "state": "enabled", 00:16:19.203 "thread": "nvmf_tgt_poll_group_000" 00:16:19.203 } 00:16:19.203 ]' 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.203 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.461 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.461 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.461 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.718 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:19.718 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:20.284 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.284 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:20.284 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.284 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.284 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.284 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.284 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:20.284 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.542 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.108 00:16:21.108 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.108 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.108 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.367 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.367 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.367 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.367 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.367 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.367 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.367 { 00:16:21.367 "auth": { 00:16:21.367 "dhgroup": "ffdhe6144", 
00:16:21.367 "digest": "sha256", 00:16:21.367 "state": "completed" 00:16:21.367 }, 00:16:21.367 "cntlid": 37, 00:16:21.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:21.367 "listen_address": { 00:16:21.367 "adrfam": "IPv4", 00:16:21.367 "traddr": "10.0.0.3", 00:16:21.367 "trsvcid": "4420", 00:16:21.367 "trtype": "TCP" 00:16:21.367 }, 00:16:21.367 "peer_address": { 00:16:21.367 "adrfam": "IPv4", 00:16:21.367 "traddr": "10.0.0.1", 00:16:21.367 "trsvcid": "48942", 00:16:21.367 "trtype": "TCP" 00:16:21.367 }, 00:16:21.367 "qid": 0, 00:16:21.367 "state": "enabled", 00:16:21.367 "thread": "nvmf_tgt_poll_group_000" 00:16:21.367 } 00:16:21.367 ]' 00:16:21.367 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.626 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.626 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.626 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.626 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.626 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.626 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.626 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.884 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:21.884 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.817 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.075 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.075 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:23.075 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.075 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.332 00:16:23.332 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.332 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.332 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.897 { 00:16:23.897 "auth": { 00:16:23.897 "dhgroup": 
"ffdhe6144", 00:16:23.897 "digest": "sha256", 00:16:23.897 "state": "completed" 00:16:23.897 }, 00:16:23.897 "cntlid": 39, 00:16:23.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:23.897 "listen_address": { 00:16:23.897 "adrfam": "IPv4", 00:16:23.897 "traddr": "10.0.0.3", 00:16:23.897 "trsvcid": "4420", 00:16:23.897 "trtype": "TCP" 00:16:23.897 }, 00:16:23.897 "peer_address": { 00:16:23.897 "adrfam": "IPv4", 00:16:23.897 "traddr": "10.0.0.1", 00:16:23.897 "trsvcid": "47408", 00:16:23.897 "trtype": "TCP" 00:16:23.897 }, 00:16:23.897 "qid": 0, 00:16:23.897 "state": "enabled", 00:16:23.897 "thread": "nvmf_tgt_poll_group_000" 00:16:23.897 } 00:16:23.897 ]' 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.897 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.154 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:24.154 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:25.086 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.086 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:25.086 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.086 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.086 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.086 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.086 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.086 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:25.086 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.344 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.910 00:16:25.910 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.910 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.910 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.169 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.169 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.169 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.169 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.169 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.169 13:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.169 { 00:16:26.169 "auth": { 00:16:26.169 "dhgroup": "ffdhe8192", 00:16:26.169 "digest": "sha256", 00:16:26.169 "state": "completed" 00:16:26.169 }, 00:16:26.169 "cntlid": 41, 00:16:26.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:26.169 "listen_address": { 00:16:26.169 "adrfam": "IPv4", 00:16:26.169 "traddr": "10.0.0.3", 00:16:26.169 "trsvcid": "4420", 00:16:26.169 "trtype": "TCP" 00:16:26.169 }, 00:16:26.169 "peer_address": { 00:16:26.169 "adrfam": "IPv4", 00:16:26.169 "traddr": "10.0.0.1", 00:16:26.169 "trsvcid": "47432", 00:16:26.169 "trtype": "TCP" 00:16:26.169 }, 00:16:26.169 "qid": 0, 00:16:26.169 "state": "enabled", 00:16:26.169 "thread": "nvmf_tgt_poll_group_000" 00:16:26.169 } 00:16:26.169 ]' 00:16:26.169 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.427 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.427 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.427 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.427 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.427 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.427 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.427 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.684 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:26.684 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:27.616 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.616 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:27.616 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.617 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.617 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.617 13:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.617 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:27.617 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.875 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.830 00:16:28.830 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.830 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.830 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.830 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.830 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.830 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.830 13:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.830 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.830 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.830 { 00:16:28.830 "auth": { 00:16:28.830 "dhgroup": "ffdhe8192", 00:16:28.830 "digest": "sha256", 00:16:28.830 "state": "completed" 00:16:28.830 }, 00:16:28.830 "cntlid": 43, 00:16:28.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:28.830 "listen_address": { 00:16:28.830 "adrfam": "IPv4", 00:16:28.830 "traddr": "10.0.0.3", 00:16:28.830 "trsvcid": "4420", 00:16:28.830 "trtype": "TCP" 00:16:28.830 }, 00:16:28.830 "peer_address": { 00:16:28.830 "adrfam": "IPv4", 00:16:28.830 "traddr": "10.0.0.1", 00:16:28.830 "trsvcid": "47454", 00:16:28.830 "trtype": "TCP" 00:16:28.830 }, 00:16:28.830 "qid": 0, 00:16:28.830 "state": "enabled", 00:16:28.830 "thread": "nvmf_tgt_poll_group_000" 00:16:28.830 } 00:16:28.830 ]' 00:16:28.830 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.088 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.088 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.088 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:29.088 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.088 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.088 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.088 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.346 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:29.346 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:30.280 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.280 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:30.280 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.280 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
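Each connect_authenticate pass in this log has the same shape; only the digest, DH group, and key index change between passes. A minimal sketch of one pass, assuming a target listening on 10.0.0.3:4420, the host bdev RPC server at /var/tmp/host.sock, and key objects key0/ckey0 registered earlier in the test (outside this excerpt); rpc.py paths are abbreviated from /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

    # host side: pin the initiator to a single digest/dhgroup combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # target side (default RPC socket): allow the host NQN, bind its DH-HMAC-CHAP keys
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller; the authentication transaction runs here
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # target side: confirm the qpair finished authenticating, then tear down
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'    # expect "completed"
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The same credentials are then exercised through the kernel initiator (nvme connect / nvme disconnect) before nvmf_subsystem_remove_host clears the way for the next key, which is the sequence the trace below repeats.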
00:16:30.280 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.280 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.280 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:30.280 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:30.538 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.539 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.105 00:16:31.105 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.105 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.105 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.670 13:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.670 { 00:16:31.670 "auth": { 00:16:31.670 "dhgroup": "ffdhe8192", 00:16:31.670 "digest": "sha256", 00:16:31.670 "state": "completed" 00:16:31.670 }, 00:16:31.670 "cntlid": 45, 00:16:31.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:31.670 "listen_address": { 00:16:31.670 "adrfam": "IPv4", 00:16:31.670 "traddr": "10.0.0.3", 00:16:31.670 "trsvcid": "4420", 00:16:31.670 "trtype": "TCP" 00:16:31.670 }, 00:16:31.670 "peer_address": { 00:16:31.670 "adrfam": "IPv4", 00:16:31.670 "traddr": "10.0.0.1", 00:16:31.670 "trsvcid": "47480", 00:16:31.670 "trtype": "TCP" 00:16:31.670 }, 00:16:31.670 "qid": 0, 00:16:31.670 "state": "enabled", 00:16:31.670 "thread": "nvmf_tgt_poll_group_000" 00:16:31.670 } 00:16:31.670 ]' 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.670 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.928 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:31.928 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:32.860 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.860 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:32.860 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
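The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line visible in every pass is a bash conditional expansion: when ckeys[keyid] is empty or unset, the array expands to nothing and the controller-key flags are silently dropped. That is why the key3 pass that follows calls nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key key3 only, exercising unidirectional authentication (the host authenticates to the target, but not the reverse). A standalone sketch of the idiom, with illustrative values:

    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)   # key3 has no controller key
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid: ${ckey[@]:-one-way auth only}"
    done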
00:16:32.860 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.860 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.860 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:32.860 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.117 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.683 00:16:33.683 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.683 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.683 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.248 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.248 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.248 
13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.248 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.248 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.248 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.248 { 00:16:34.248 "auth": { 00:16:34.248 "dhgroup": "ffdhe8192", 00:16:34.248 "digest": "sha256", 00:16:34.248 "state": "completed" 00:16:34.248 }, 00:16:34.248 "cntlid": 47, 00:16:34.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:34.249 "listen_address": { 00:16:34.249 "adrfam": "IPv4", 00:16:34.249 "traddr": "10.0.0.3", 00:16:34.249 "trsvcid": "4420", 00:16:34.249 "trtype": "TCP" 00:16:34.249 }, 00:16:34.249 "peer_address": { 00:16:34.249 "adrfam": "IPv4", 00:16:34.249 "traddr": "10.0.0.1", 00:16:34.249 "trsvcid": "39872", 00:16:34.249 "trtype": "TCP" 00:16:34.249 }, 00:16:34.249 "qid": 0, 00:16:34.249 "state": "enabled", 00:16:34.249 "thread": "nvmf_tgt_poll_group_000" 00:16:34.249 } 00:16:34.249 ]' 00:16:34.249 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.249 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.249 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.249 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:34.249 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.249 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.249 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.249 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.813 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:34.813 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:35.380 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.380 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:35.380 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.380 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:35.380 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.380 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:35.380 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.380 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.380 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:35.380 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.639 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.206 00:16:36.206 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.206 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.206 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.558 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.558 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.558 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.558 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.558 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.558 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.558 { 00:16:36.558 "auth": { 00:16:36.558 "dhgroup": "null", 00:16:36.558 "digest": "sha384", 00:16:36.558 "state": "completed" 00:16:36.558 }, 00:16:36.558 "cntlid": 49, 00:16:36.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:36.558 "listen_address": { 00:16:36.558 "adrfam": "IPv4", 00:16:36.558 "traddr": "10.0.0.3", 00:16:36.558 "trsvcid": "4420", 00:16:36.558 "trtype": "TCP" 00:16:36.558 }, 00:16:36.558 "peer_address": { 00:16:36.558 "adrfam": "IPv4", 00:16:36.558 "traddr": "10.0.0.1", 00:16:36.558 "trsvcid": "39908", 00:16:36.558 "trtype": "TCP" 00:16:36.558 }, 00:16:36.558 "qid": 0, 00:16:36.558 "state": "enabled", 00:16:36.558 "thread": "nvmf_tgt_poll_group_000" 00:16:36.558 } 00:16:36.558 ]' 00:16:36.558 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.558 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.558 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.558 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:36.558 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.558 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.558 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.558 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.818 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:36.818 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:37.750 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.750 13:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:37.750 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.750 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.750 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.750 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.750 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:37.750 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.008 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.266 00:16:38.266 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.266 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
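For orientation: around 13:10:24 above, the @118/@119 loop markers show the outer sweep advancing, with digest moving on to sha384 and dhgroup starting over at null. The driving structure, reconstructed from the auth.sh line markers in this log (array contents are illustrative; only sha256/sha384 and null/ffdhe2048/ffdhe8192 are visible in this excerpt, and hostrpc wraps rpc.py -s /var/tmp/host.sock):

    for digest in "${digests[@]}"; do                        # auth.sh@118
        for dhgroup in "${dhgroups[@]}"; do                  # auth.sh@119
            for keyid in "${!keys[@]}"; do                   # auth.sh@120
                # restrict the host to exactly one combination     auth.sh@121
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@123
            done
        done
    done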
00:16:38.266 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.524 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.524 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.524 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.524 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.524 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.524 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.524 { 00:16:38.525 "auth": { 00:16:38.525 "dhgroup": "null", 00:16:38.525 "digest": "sha384", 00:16:38.525 "state": "completed" 00:16:38.525 }, 00:16:38.525 "cntlid": 51, 00:16:38.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:38.525 "listen_address": { 00:16:38.525 "adrfam": "IPv4", 00:16:38.525 "traddr": "10.0.0.3", 00:16:38.525 "trsvcid": "4420", 00:16:38.525 "trtype": "TCP" 00:16:38.525 }, 00:16:38.525 "peer_address": { 00:16:38.525 "adrfam": "IPv4", 00:16:38.525 "traddr": "10.0.0.1", 00:16:38.525 "trsvcid": "39926", 00:16:38.525 "trtype": "TCP" 00:16:38.525 }, 00:16:38.525 "qid": 0, 00:16:38.525 "state": "enabled", 00:16:38.525 "thread": "nvmf_tgt_poll_group_000" 00:16:38.525 } 00:16:38.525 ]' 00:16:38.525 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.525 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.525 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.783 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.783 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.783 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.783 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.783 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.040 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:39.040 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:39.974 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.974 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.974 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:39.974 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.974 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.974 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.974 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.974 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:39.974 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.232 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.489 00:16:40.489 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.489 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:40.489 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.836 { 00:16:40.836 "auth": { 00:16:40.836 "dhgroup": "null", 00:16:40.836 "digest": "sha384", 00:16:40.836 "state": "completed" 00:16:40.836 }, 00:16:40.836 "cntlid": 53, 00:16:40.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:40.836 "listen_address": { 00:16:40.836 "adrfam": "IPv4", 00:16:40.836 "traddr": "10.0.0.3", 00:16:40.836 "trsvcid": "4420", 00:16:40.836 "trtype": "TCP" 00:16:40.836 }, 00:16:40.836 "peer_address": { 00:16:40.836 "adrfam": "IPv4", 00:16:40.836 "traddr": "10.0.0.1", 00:16:40.836 "trsvcid": "39948", 00:16:40.836 "trtype": "TCP" 00:16:40.836 }, 00:16:40.836 "qid": 0, 00:16:40.836 "state": "enabled", 00:16:40.836 "thread": "nvmf_tgt_poll_group_000" 00:16:40.836 } 00:16:40.836 ]' 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:40.836 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.108 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.108 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.108 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.365 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:41.365 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:41.930 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.930 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:41.930 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.930 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.930 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.930 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.931 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.188 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.753 00:16:42.753 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.753 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
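Each qpair dump above is checked three ways, tying the negotiated parameters back to what bdev_nvme_set_options requested; the glob-escaped comparisons in the trace ([[ sha384 == \s\h\a\3\8\4 ]] and so on) are the expanded form of these assertions. A sketch:

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

Note also the cntlid in the dumps climbing by two on every pass (41, 43, 45, ...), consistent with each pass connecting twice, once via the SPDK initiator and once via nvme-cli, with every connection allocating a fresh controller ID on the subsystem.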
00:16:42.753 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.011 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.011 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.011 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.011 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.011 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.011 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.011 { 00:16:43.011 "auth": { 00:16:43.011 "dhgroup": "null", 00:16:43.011 "digest": "sha384", 00:16:43.011 "state": "completed" 00:16:43.011 }, 00:16:43.011 "cntlid": 55, 00:16:43.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:43.011 "listen_address": { 00:16:43.011 "adrfam": "IPv4", 00:16:43.011 "traddr": "10.0.0.3", 00:16:43.011 "trsvcid": "4420", 00:16:43.011 "trtype": "TCP" 00:16:43.011 }, 00:16:43.011 "peer_address": { 00:16:43.011 "adrfam": "IPv4", 00:16:43.011 "traddr": "10.0.0.1", 00:16:43.011 "trsvcid": "48426", 00:16:43.011 "trtype": "TCP" 00:16:43.011 }, 00:16:43.011 "qid": 0, 00:16:43.011 "state": "enabled", 00:16:43.011 "thread": "nvmf_tgt_poll_group_000" 00:16:43.011 } 00:16:43.011 ]' 00:16:43.011 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.011 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.011 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.269 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.269 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.269 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.269 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.269 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.528 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:43.528 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:44.462 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
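The second leg of every pass replays the same credentials through the kernel initiator. The DHHC-1:<nn>: prefix on each secret is the standard NVMe-oF secret representation; the two-digit field identifies the transformation hash applied to the secret (00 for an untransformed secret), which is why differently flavored host and controller secrets appear throughout this log. A sketch of the leg, with the base64 secret bodies elided here (they appear in full in the trace):

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 \
        -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Here -i 1 caps the I/O queue count at one and -l 0 zeroes the controller-loss timeout, so a failed connection gives up immediately rather than retrying.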
00:16:44.462 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:44.462 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.462 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.462 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.462 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.462 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.462 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.462 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.720 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.024 00:16:45.024 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.024 
13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.024 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.282 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.282 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.282 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.282 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.282 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.282 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.282 { 00:16:45.282 "auth": { 00:16:45.282 "dhgroup": "ffdhe2048", 00:16:45.282 "digest": "sha384", 00:16:45.282 "state": "completed" 00:16:45.282 }, 00:16:45.282 "cntlid": 57, 00:16:45.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:45.282 "listen_address": { 00:16:45.282 "adrfam": "IPv4", 00:16:45.282 "traddr": "10.0.0.3", 00:16:45.282 "trsvcid": "4420", 00:16:45.282 "trtype": "TCP" 00:16:45.282 }, 00:16:45.282 "peer_address": { 00:16:45.282 "adrfam": "IPv4", 00:16:45.282 "traddr": "10.0.0.1", 00:16:45.282 "trsvcid": "48448", 00:16:45.282 "trtype": "TCP" 00:16:45.282 }, 00:16:45.282 "qid": 0, 00:16:45.282 "state": "enabled", 00:16:45.282 "thread": "nvmf_tgt_poll_group_000" 00:16:45.282 } 00:16:45.282 ]' 00:16:45.282 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.541 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.541 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.541 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.541 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.541 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.541 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.541 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.799 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:45.799 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: 
--dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:46.731 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.731 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:46.731 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.731 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.731 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.731 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.731 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.731 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.987 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.243 00:16:47.243 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.243 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.243 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.500 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.500 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.500 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.500 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.500 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.500 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.500 { 00:16:47.500 "auth": { 00:16:47.500 "dhgroup": "ffdhe2048", 00:16:47.500 "digest": "sha384", 00:16:47.500 "state": "completed" 00:16:47.500 }, 00:16:47.500 "cntlid": 59, 00:16:47.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:47.500 "listen_address": { 00:16:47.500 "adrfam": "IPv4", 00:16:47.500 "traddr": "10.0.0.3", 00:16:47.500 "trsvcid": "4420", 00:16:47.500 "trtype": "TCP" 00:16:47.500 }, 00:16:47.500 "peer_address": { 00:16:47.500 "adrfam": "IPv4", 00:16:47.500 "traddr": "10.0.0.1", 00:16:47.500 "trsvcid": "48484", 00:16:47.500 "trtype": "TCP" 00:16:47.500 }, 00:16:47.500 "qid": 0, 00:16:47.500 "state": "enabled", 00:16:47.500 "thread": "nvmf_tgt_poll_group_000" 00:16:47.500 } 00:16:47.500 ]' 00:16:47.500 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.500 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.500 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.755 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.755 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.755 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.755 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.755 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.012 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:48.012 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:48.578 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.578 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:48.578 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.578 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.836 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.836 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.836 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:48.836 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.094 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.352 00:16:49.352 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.352 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.352 13:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.610 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.610 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.610 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.610 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.610 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.610 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.610 { 00:16:49.610 "auth": { 00:16:49.610 "dhgroup": "ffdhe2048", 00:16:49.610 "digest": "sha384", 00:16:49.610 "state": "completed" 00:16:49.610 }, 00:16:49.610 "cntlid": 61, 00:16:49.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:49.610 "listen_address": { 00:16:49.610 "adrfam": "IPv4", 00:16:49.610 "traddr": "10.0.0.3", 00:16:49.610 "trsvcid": "4420", 00:16:49.610 "trtype": "TCP" 00:16:49.610 }, 00:16:49.610 "peer_address": { 00:16:49.610 "adrfam": "IPv4", 00:16:49.610 "traddr": "10.0.0.1", 00:16:49.610 "trsvcid": "48510", 00:16:49.610 "trtype": "TCP" 00:16:49.610 }, 00:16:49.610 "qid": 0, 00:16:49.610 "state": "enabled", 00:16:49.610 "thread": "nvmf_tgt_poll_group_000" 00:16:49.610 } 00:16:49.610 ]' 00:16:49.610 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.610 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.610 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.610 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.868 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.868 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.868 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.868 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.175 13:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:50.175 13:10:38 
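The secrets handed to nvme connect in these passes use the DH-HMAC-CHAP secret representation DHHC-1:<hh>:<base64>:, where the two-digit field records the hash used to transform the secret (00 for an untransformed secret, 01/02/03 for SHA-256/384/512); --dhchap-secret authenticates the host, and adding --dhchap-ctrl-secret makes the controller prove itself back. A minimal standalone sketch, assuming a recent nvme-cli that ships the gen-dhchap-key subcommand (the exact flag spellings vary by nvme-cli version, and /etc/nvme/hostnqn is assumed to exist):

    # Illustrative only: generate a 48-byte, SHA-384-transformed secret in
    # the same DHHC-1:<hh>:<base64>: form and connect with host-only auth.
    hostnqn=$(cat /etc/nvme/hostnqn)
    key=$(nvme gen-dhchap-key --hmac=2 --key-length=48 --nqn "$hostnqn")
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-secret "$key"

Passing the same secret string to the target via its host entry (next note) is what lets the handshake complete.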
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:50.739 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.739 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:50.739 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.739 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.739 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.739 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.739 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.739 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.304 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:51.304 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.304 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.304 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:51.304 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.304 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.304 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:16:51.304 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.304 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.304 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.305 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.305 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.305 13:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.562 00:16:51.562 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.562 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.562 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.843 { 00:16:51.843 "auth": { 00:16:51.843 "dhgroup": "ffdhe2048", 00:16:51.843 "digest": "sha384", 00:16:51.843 "state": "completed" 00:16:51.843 }, 00:16:51.843 "cntlid": 63, 00:16:51.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:51.843 "listen_address": { 00:16:51.843 "adrfam": "IPv4", 00:16:51.843 "traddr": "10.0.0.3", 00:16:51.843 "trsvcid": "4420", 00:16:51.843 "trtype": "TCP" 00:16:51.843 }, 00:16:51.843 "peer_address": { 00:16:51.843 "adrfam": "IPv4", 00:16:51.843 "traddr": "10.0.0.1", 00:16:51.843 "trsvcid": "48548", 00:16:51.843 "trtype": "TCP" 00:16:51.843 }, 00:16:51.843 "qid": 0, 00:16:51.843 "state": "enabled", 00:16:51.843 "thread": "nvmf_tgt_poll_group_000" 00:16:51.843 } 00:16:51.843 ]' 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.843 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.101 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.101 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.101 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.360 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:52.360 13:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:16:52.926 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.926 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:52.926 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.926 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.926 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.926 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.926 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.926 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:52.926 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:53.184 13:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.750 00:16:53.750 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.751 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.751 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.010 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.010 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.010 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.010 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.010 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.010 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.010 { 00:16:54.010 "auth": { 00:16:54.010 "dhgroup": "ffdhe3072", 00:16:54.010 "digest": "sha384", 00:16:54.010 "state": "completed" 00:16:54.010 }, 00:16:54.010 "cntlid": 65, 00:16:54.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:54.010 "listen_address": { 00:16:54.010 "adrfam": "IPv4", 00:16:54.010 "traddr": "10.0.0.3", 00:16:54.010 "trsvcid": "4420", 00:16:54.010 "trtype": "TCP" 00:16:54.010 }, 00:16:54.010 "peer_address": { 00:16:54.010 "adrfam": "IPv4", 00:16:54.010 "traddr": "10.0.0.1", 00:16:54.010 "trsvcid": "55078", 00:16:54.010 "trtype": "TCP" 00:16:54.010 }, 00:16:54.010 "qid": 0, 00:16:54.010 "state": "enabled", 00:16:54.010 "thread": "nvmf_tgt_poll_group_000" 00:16:54.010 } 00:16:54.010 ]' 00:16:54.010 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.010 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.010 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.269 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.269 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.269 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.269 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.269 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.527 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:54.527 13:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:16:55.101 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.101 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:55.101 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.101 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.101 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.101 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.101 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:55.101 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.668 13:10:44 
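On the SPDK host side, each pairing above reduces to two RPCs: bdev_nvme_set_options pins which DH-HMAC-CHAP digests and FF-DHE groups the initiator will offer, and bdev_nvme_attach_controller names the keys to authenticate with. A condensed sketch of the sequence this test drives, using the socket and paths from this run; key1/ckey1 are keyring entries the suite registered earlier, and $hostnqn stands for the host NQN (nqn.2014-08.org.nvmexpress:uuid:02865f8e-... here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Restrict the host to sha384 + ffdhe3072 for this iteration.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # Attach with key1 (host secret) and ckey1 (controller secret).
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1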
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.668 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.927 00:16:55.927 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.927 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.927 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.185 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.185 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.185 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.185 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.185 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.185 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.185 { 00:16:56.185 "auth": { 00:16:56.185 "dhgroup": "ffdhe3072", 00:16:56.185 "digest": "sha384", 00:16:56.185 "state": "completed" 00:16:56.185 }, 00:16:56.185 "cntlid": 67, 00:16:56.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:56.185 "listen_address": { 00:16:56.185 "adrfam": "IPv4", 00:16:56.185 "traddr": "10.0.0.3", 00:16:56.185 "trsvcid": "4420", 00:16:56.185 "trtype": "TCP" 00:16:56.185 }, 00:16:56.185 "peer_address": { 00:16:56.185 "adrfam": "IPv4", 00:16:56.185 "traddr": "10.0.0.1", 00:16:56.185 "trsvcid": "55098", 00:16:56.185 "trtype": "TCP" 00:16:56.185 }, 00:16:56.185 "qid": 0, 00:16:56.185 "state": "enabled", 00:16:56.185 "thread": "nvmf_tgt_poll_group_000" 00:16:56.185 } 00:16:56.185 ]' 00:16:56.185 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.185 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.185 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.444 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.444 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.444 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.444 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.444 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.702 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:56.702 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:16:57.267 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.267 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:57.267 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.267 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.267 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.267 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.267 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.267 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.833 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.092 00:16:58.092 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.092 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.092 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.351 { 00:16:58.351 "auth": { 00:16:58.351 "dhgroup": "ffdhe3072", 00:16:58.351 "digest": "sha384", 00:16:58.351 "state": "completed" 00:16:58.351 }, 00:16:58.351 "cntlid": 69, 00:16:58.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:16:58.351 "listen_address": { 00:16:58.351 "adrfam": "IPv4", 00:16:58.351 "traddr": "10.0.0.3", 00:16:58.351 "trsvcid": "4420", 00:16:58.351 "trtype": "TCP" 00:16:58.351 }, 00:16:58.351 "peer_address": { 00:16:58.351 "adrfam": "IPv4", 00:16:58.351 "traddr": "10.0.0.1", 00:16:58.351 "trsvcid": "55114", 00:16:58.351 "trtype": "TCP" 00:16:58.351 }, 00:16:58.351 "qid": 0, 00:16:58.351 "state": "enabled", 00:16:58.351 "thread": "nvmf_tgt_poll_group_000" 00:16:58.351 } 00:16:58.351 ]' 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.351 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.609 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.609 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:58.609 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.867 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:58.867 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:16:59.434 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.434 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:16:59.434 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.434 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.434 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.434 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.434 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.434 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.002 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.261 00:17:00.261 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.261 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.261 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.519 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.519 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.519 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.519 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.519 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.519 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.519 { 00:17:00.519 "auth": { 00:17:00.519 "dhgroup": "ffdhe3072", 00:17:00.519 "digest": "sha384", 00:17:00.519 "state": "completed" 00:17:00.519 }, 00:17:00.519 "cntlid": 71, 00:17:00.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:00.519 "listen_address": { 00:17:00.519 "adrfam": "IPv4", 00:17:00.519 "traddr": "10.0.0.3", 00:17:00.519 "trsvcid": "4420", 00:17:00.519 "trtype": "TCP" 00:17:00.519 }, 00:17:00.519 "peer_address": { 00:17:00.519 "adrfam": "IPv4", 00:17:00.519 "traddr": "10.0.0.1", 00:17:00.519 "trsvcid": "55140", 00:17:00.519 "trtype": "TCP" 00:17:00.519 }, 00:17:00.519 "qid": 0, 00:17:00.519 "state": "enabled", 00:17:00.519 "thread": "nvmf_tgt_poll_group_000" 00:17:00.519 } 00:17:00.519 ]' 00:17:00.519 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.519 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.519 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.778 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:00.778 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.778 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.778 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.778 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.037 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:17:01.037 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:17:01.972 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.972 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:01.972 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.972 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.972 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.972 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.972 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.972 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.973 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.255 13:10:50 
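On the target side, authentication is opted into per host: nvmf_subsystem_add_host with --dhchap-key requires that host to complete DH-HMAC-CHAP when it connects, and --dhchap-ctrlr-key additionally has the controller authenticate back. Note that the key3 iterations above omit the controller key, so those handshakes are unidirectional. A sketch with the names from this run (rpc_cmd is the suite's wrapper around scripts/rpc.py aimed at the target):

    # Require DH-HMAC-CHAP from this host; dropping --dhchap-ctrlr-key, as
    # the key3 passes do, leaves the authentication host-only.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0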
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.255 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.513 00:17:02.513 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.513 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.513 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.772 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.772 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.772 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.772 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.772 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.772 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.772 { 00:17:02.772 "auth": { 00:17:02.772 "dhgroup": "ffdhe4096", 00:17:02.772 "digest": "sha384", 00:17:02.772 "state": "completed" 00:17:02.772 }, 00:17:02.772 "cntlid": 73, 00:17:02.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:02.772 "listen_address": { 00:17:02.772 "adrfam": "IPv4", 00:17:02.772 "traddr": "10.0.0.3", 00:17:02.772 "trsvcid": "4420", 00:17:02.772 "trtype": "TCP" 00:17:02.773 }, 00:17:02.773 "peer_address": { 00:17:02.773 "adrfam": "IPv4", 00:17:02.773 "traddr": "10.0.0.1", 00:17:02.773 "trsvcid": "42726", 00:17:02.773 "trtype": "TCP" 00:17:02.773 }, 00:17:02.773 "qid": 0, 00:17:02.773 "state": "enabled", 00:17:02.773 "thread": "nvmf_tgt_poll_group_000" 00:17:02.773 } 00:17:02.773 ]' 00:17:02.773 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.773 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.773 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.030 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.030 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.030 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.030 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.030 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.287 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:17:03.287 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:17:03.851 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.851 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:03.851 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.851 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.851 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.851 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.851 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:03.851 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.108 13:10:52 
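Each attach is then verified the same way: once the handshake finishes, nvmf_subsystem_get_qpairs reports a per-qpair "auth" object, and the test asserts the negotiated digest, DH group, and state against what it configured. The checks for this ffdhe4096 pass, condensed into a runnable form:

    # Mirror of the jq assertions above: the first qpair of cnode0 must
    # have completed DH-HMAC-CHAP with the expected parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]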
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.108 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.699 00:17:04.699 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.699 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.699 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.956 { 00:17:04.956 "auth": { 00:17:04.956 "dhgroup": "ffdhe4096", 00:17:04.956 "digest": "sha384", 00:17:04.956 "state": "completed" 00:17:04.956 }, 00:17:04.956 "cntlid": 75, 00:17:04.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:04.956 "listen_address": { 00:17:04.956 "adrfam": "IPv4", 00:17:04.956 "traddr": "10.0.0.3", 00:17:04.956 "trsvcid": "4420", 00:17:04.956 "trtype": "TCP" 00:17:04.956 }, 00:17:04.956 "peer_address": { 00:17:04.956 "adrfam": "IPv4", 00:17:04.956 "traddr": "10.0.0.1", 00:17:04.956 "trsvcid": "42762", 00:17:04.956 "trtype": "TCP" 00:17:04.956 }, 00:17:04.956 "qid": 0, 00:17:04.956 "state": "enabled", 00:17:04.956 "thread": "nvmf_tgt_poll_group_000" 00:17:04.956 } 00:17:04.956 ]' 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:17:04.956 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.214 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.214 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.214 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.471 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:17:05.471 13:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:17:06.035 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.035 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:06.035 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.035 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.035 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.035 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.035 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.035 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.599 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.600 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.600 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.858 00:17:06.858 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.858 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.858 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.117 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.117 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.117 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.117 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.117 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.117 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.117 { 00:17:07.117 "auth": { 00:17:07.117 "dhgroup": "ffdhe4096", 00:17:07.117 "digest": "sha384", 00:17:07.117 "state": "completed" 00:17:07.117 }, 00:17:07.117 "cntlid": 77, 00:17:07.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:07.117 "listen_address": { 00:17:07.117 "adrfam": "IPv4", 00:17:07.117 "traddr": "10.0.0.3", 00:17:07.117 "trsvcid": "4420", 00:17:07.117 "trtype": "TCP" 00:17:07.117 }, 00:17:07.117 "peer_address": { 00:17:07.117 "adrfam": "IPv4", 00:17:07.117 "traddr": "10.0.0.1", 00:17:07.117 "trsvcid": "42792", 00:17:07.117 "trtype": "TCP" 00:17:07.117 }, 00:17:07.117 "qid": 0, 00:17:07.117 "state": "enabled", 00:17:07.117 "thread": "nvmf_tgt_poll_group_000" 00:17:07.117 } 00:17:07.117 ]' 00:17:07.117 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.117 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.117 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:17:07.398 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.398 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.398 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.398 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.398 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.696 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:17:07.696 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:17:08.262 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.262 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:08.262 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.262 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.262 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.262 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.262 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.262 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.829 13:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.829 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.088 00:17:09.088 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.088 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.088 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.359 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.359 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.359 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.359 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.359 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.359 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.359 { 00:17:09.359 "auth": { 00:17:09.359 "dhgroup": "ffdhe4096", 00:17:09.359 "digest": "sha384", 00:17:09.359 "state": "completed" 00:17:09.359 }, 00:17:09.359 "cntlid": 79, 00:17:09.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:09.359 "listen_address": { 00:17:09.359 "adrfam": "IPv4", 00:17:09.359 "traddr": "10.0.0.3", 00:17:09.359 "trsvcid": "4420", 00:17:09.359 "trtype": "TCP" 00:17:09.359 }, 00:17:09.359 "peer_address": { 00:17:09.359 "adrfam": "IPv4", 00:17:09.359 "traddr": "10.0.0.1", 00:17:09.359 "trsvcid": "42828", 00:17:09.359 "trtype": "TCP" 00:17:09.359 }, 00:17:09.359 "qid": 0, 00:17:09.359 "state": "enabled", 00:17:09.359 "thread": "nvmf_tgt_poll_group_000" 00:17:09.359 } 00:17:09.359 ]' 00:17:09.359 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.619 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.619 13:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.619 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.619 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.619 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.619 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.619 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.879 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:17:09.879 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.815 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.382 00:17:11.382 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.382 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.382 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.948 { 00:17:11.948 "auth": { 00:17:11.948 "dhgroup": "ffdhe6144", 00:17:11.948 "digest": "sha384", 00:17:11.948 "state": "completed" 00:17:11.948 }, 00:17:11.948 "cntlid": 81, 00:17:11.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:11.948 "listen_address": { 00:17:11.948 "adrfam": "IPv4", 00:17:11.948 "traddr": "10.0.0.3", 00:17:11.948 "trsvcid": "4420", 00:17:11.948 "trtype": "TCP" 00:17:11.948 }, 00:17:11.948 "peer_address": { 00:17:11.948 "adrfam": "IPv4", 00:17:11.948 "traddr": "10.0.0.1", 00:17:11.948 "trsvcid": "42850", 00:17:11.948 "trtype": "TCP" 00:17:11.948 }, 00:17:11.948 "qid": 0, 00:17:11.948 "state": "enabled", 00:17:11.948 "thread": "nvmf_tgt_poll_group_000" 00:17:11.948 } 00:17:11.948 ]' 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.948 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.207 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:17:12.207 13:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:17:13.145 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.145 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:13.145 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.145 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.145 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.145 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.145 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.145 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.403 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:13.403 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.403 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.403 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:17:13.403 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.403 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.403 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.403 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.404 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.404 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.404 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.404 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.404 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.970 00:17:13.970 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.970 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.970 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.228 { 00:17:14.228 "auth": { 00:17:14.228 "dhgroup": "ffdhe6144", 00:17:14.228 "digest": "sha384", 00:17:14.228 "state": "completed" 00:17:14.228 }, 00:17:14.228 "cntlid": 83, 00:17:14.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:14.228 "listen_address": { 00:17:14.228 "adrfam": "IPv4", 00:17:14.228 "traddr": "10.0.0.3", 00:17:14.228 "trsvcid": "4420", 00:17:14.228 "trtype": "TCP" 00:17:14.228 }, 00:17:14.228 "peer_address": { 00:17:14.228 "adrfam": "IPv4", 00:17:14.228 "traddr": "10.0.0.1", 00:17:14.228 "trsvcid": "53240", 00:17:14.228 "trtype": "TCP" 00:17:14.228 }, 00:17:14.228 "qid": 0, 00:17:14.228 "state": 
"enabled", 00:17:14.228 "thread": "nvmf_tgt_poll_group_000" 00:17:14.228 } 00:17:14.228 ]' 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.228 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.487 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:17:14.487 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:17:15.423 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.423 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:15.423 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.423 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.423 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.423 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.423 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.423 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.681 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.248 00:17:16.248 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.248 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.248 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.507 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.507 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.507 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.507 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.507 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.507 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.507 { 00:17:16.507 "auth": { 00:17:16.507 "dhgroup": "ffdhe6144", 00:17:16.507 "digest": "sha384", 00:17:16.507 "state": "completed" 00:17:16.507 }, 00:17:16.507 "cntlid": 85, 00:17:16.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:16.507 "listen_address": { 00:17:16.507 "adrfam": "IPv4", 00:17:16.507 "traddr": "10.0.0.3", 00:17:16.507 "trsvcid": "4420", 00:17:16.507 "trtype": "TCP" 00:17:16.507 }, 00:17:16.507 "peer_address": { 00:17:16.507 "adrfam": "IPv4", 00:17:16.507 "traddr": "10.0.0.1", 00:17:16.507 
"trsvcid": "53280", 00:17:16.507 "trtype": "TCP" 00:17:16.507 }, 00:17:16.507 "qid": 0, 00:17:16.507 "state": "enabled", 00:17:16.507 "thread": "nvmf_tgt_poll_group_000" 00:17:16.507 } 00:17:16.507 ]' 00:17:16.507 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.507 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.507 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.765 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.765 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.765 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.765 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.765 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.022 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:17:17.022 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:17:17.587 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.587 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:17.587 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.587 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.587 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.587 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.587 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.587 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.154 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.413 00:17:18.671 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.671 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.671 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.962 { 00:17:18.962 "auth": { 00:17:18.962 "dhgroup": "ffdhe6144", 00:17:18.962 "digest": "sha384", 00:17:18.962 "state": "completed" 00:17:18.962 }, 00:17:18.962 "cntlid": 87, 00:17:18.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:18.962 "listen_address": { 00:17:18.962 "adrfam": "IPv4", 00:17:18.962 "traddr": "10.0.0.3", 00:17:18.962 "trsvcid": "4420", 00:17:18.962 "trtype": "TCP" 00:17:18.962 }, 00:17:18.962 "peer_address": { 00:17:18.962 "adrfam": "IPv4", 00:17:18.962 "traddr": "10.0.0.1", 
00:17:18.962 "trsvcid": "53304", 00:17:18.962 "trtype": "TCP" 00:17:18.962 }, 00:17:18.962 "qid": 0, 00:17:18.962 "state": "enabled", 00:17:18.962 "thread": "nvmf_tgt_poll_group_000" 00:17:18.962 } 00:17:18.962 ]' 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.962 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.528 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:17:19.528 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:17:20.093 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.093 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:20.093 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.093 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.093 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.093 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.093 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.093 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.093 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.350 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.351 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.351 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.283 00:17:21.283 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.283 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.283 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.540 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.540 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.540 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.540 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.540 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.540 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.540 { 00:17:21.540 "auth": { 00:17:21.540 "dhgroup": "ffdhe8192", 00:17:21.540 "digest": "sha384", 00:17:21.540 "state": "completed" 00:17:21.540 }, 00:17:21.540 "cntlid": 89, 00:17:21.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:21.540 "listen_address": { 00:17:21.540 "adrfam": "IPv4", 00:17:21.540 "traddr": "10.0.0.3", 00:17:21.540 "trsvcid": "4420", 00:17:21.540 "trtype": "TCP" 
00:17:21.540 }, 00:17:21.540 "peer_address": { 00:17:21.540 "adrfam": "IPv4", 00:17:21.540 "traddr": "10.0.0.1", 00:17:21.540 "trsvcid": "53330", 00:17:21.540 "trtype": "TCP" 00:17:21.540 }, 00:17:21.540 "qid": 0, 00:17:21.540 "state": "enabled", 00:17:21.540 "thread": "nvmf_tgt_poll_group_000" 00:17:21.540 } 00:17:21.540 ]' 00:17:21.540 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.540 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.540 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.540 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.540 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.540 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.540 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.540 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.798 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:17:21.798 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:17:22.731 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.731 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:22.731 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.731 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.731 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.731 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.731 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.731 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.993 13:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:22.993 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:23.575
00:17:23.575 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:23.575 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:23.575 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:23.833 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:23.833 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:23.833 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.833 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.833 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.833 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:23.833 {
00:17:23.833 "auth": {
00:17:23.833 "dhgroup": "ffdhe8192",
00:17:23.833 "digest": "sha384",
00:17:23.833 "state": "completed"
00:17:23.833 },
00:17:23.833 "cntlid": 91,
00:17:23.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8",
00:17:23.833 "listen_address": {
00:17:23.833 "adrfam": "IPv4",
00:17:23.833 "traddr": "10.0.0.3",
00:17:23.833 "trsvcid": "4420",
00:17:23.833 "trtype": "TCP"
00:17:23.833 },
00:17:23.833 "peer_address": {
00:17:23.833 "adrfam": "IPv4",
00:17:23.833 "traddr": "10.0.0.1",
00:17:23.833 "trsvcid": "35822",
00:17:23.833 "trtype": "TCP"
00:17:23.833 },
00:17:23.833 "qid": 0,
00:17:23.833 "state": "enabled",
00:17:23.833 "thread": "nvmf_tgt_poll_group_000"
00:17:23.833 }
00:17:23.833 ]'
00:17:23.833 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:23.833 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:23.833 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:24.092 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:24.092 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:24.092 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:24.092 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:24.092 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:24.351 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==:
00:17:24.351 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==:
00:17:25.286 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:25.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:25.286 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:17:25.286 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.286 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.286 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.286 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:25.286 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:25.286 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:25.544 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:26.111
00:17:26.111 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:26.111 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.111 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:26.369 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:26.369 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:26.369 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.369 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.369 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.369 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:26.369 {
00:17:26.369 "auth": {
00:17:26.369 "dhgroup": "ffdhe8192",
00:17:26.369 "digest": "sha384",
00:17:26.369 "state": "completed"
00:17:26.369 },
00:17:26.369 "cntlid": 93,
00:17:26.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8",
00:17:26.369 "listen_address": {
00:17:26.369 "adrfam": "IPv4",
00:17:26.369 "traddr": "10.0.0.3",
00:17:26.369 "trsvcid": "4420",
00:17:26.369 "trtype": "TCP"
00:17:26.369 },
00:17:26.369 "peer_address": {
00:17:26.369 "adrfam": "IPv4",
00:17:26.369 "traddr": "10.0.0.1",
00:17:26.369 "trsvcid": "35846",
00:17:26.369 "trtype": "TCP"
00:17:26.369 },
00:17:26.369 "qid": 0,
00:17:26.369 "state": "enabled",
00:17:26.369 "thread": "nvmf_tgt_poll_group_000"
00:17:26.369 }
00:17:26.369 ]'
00:17:26.627 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:26.627 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:26.627 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:26.627 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:26.627 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:26.627 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:26.627 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:26.627 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:26.927 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU:
00:17:26.927 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU:
00:17:27.858 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:27.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:27.858 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:17:27.858 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.858 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.858 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.858 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:27.858 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:27.858 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:28.116 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:28.682
00:17:28.682 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:28.682 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:28.682 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:28.940 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:28.940 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:28.940 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.940 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.940 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.940 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:28.940 {
00:17:28.940 "auth": {
00:17:28.940 "dhgroup": "ffdhe8192",
00:17:28.940 "digest": "sha384",
00:17:28.940 "state": "completed"
00:17:28.940 },
00:17:28.940 "cntlid": 95,
00:17:28.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8",
00:17:28.940 "listen_address": {
00:17:28.940 "adrfam": "IPv4",
00:17:28.940 "traddr": "10.0.0.3",
00:17:28.940 "trsvcid": "4420",
00:17:28.940 "trtype": "TCP"
00:17:28.940 },
00:17:28.940 "peer_address": {
00:17:28.940 "adrfam": "IPv4",
00:17:28.940 "traddr": "10.0.0.1",
00:17:28.940 "trsvcid": "35872",
00:17:28.940 "trtype": "TCP"
00:17:28.940 },
00:17:28.940 "qid": 0,
00:17:28.940 "state": "enabled",
00:17:28.940 "thread": "nvmf_tgt_poll_group_000"
00:17:28.940 }
00:17:28.940 ]'
00:17:28.940 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:28.940 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:28.940 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:29.198 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:29.198 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:29.198 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:29.198 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:29.198 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:29.456 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=:
00:17:29.456 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=:
00:17:30.392 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:30.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:30.392 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:17:30.392 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.392 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.392 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.392 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:17:30.392 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:30.392 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:30.392 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:30.392 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:30.651 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:30.909
00:17:30.909 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:30.909 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:30.909 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:31.167 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:31.167 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:31.167 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.167 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:31.167 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.167 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:31.167 {
00:17:31.167 "auth": {
00:17:31.167 "dhgroup": "null",
00:17:31.167 "digest": "sha512",
00:17:31.167 "state": "completed"
00:17:31.167 },
00:17:31.167 "cntlid": 97,
00:17:31.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8",
00:17:31.167 "listen_address": {
00:17:31.167 "adrfam": "IPv4",
00:17:31.167 "traddr": "10.0.0.3",
00:17:31.167 "trsvcid": "4420",
00:17:31.167 "trtype": "TCP"
00:17:31.167 },
00:17:31.167 "peer_address": {
00:17:31.168 "adrfam": "IPv4",
00:17:31.168 "traddr": "10.0.0.1",
00:17:31.168 "trsvcid": "35888",
00:17:31.168 "trtype": "TCP"
00:17:31.168 },
00:17:31.168 "qid": 0,
00:17:31.168 "state": "enabled",
00:17:31.168 "thread": "nvmf_tgt_poll_group_000"
00:17:31.168 }
00:17:31.168 ]'
00:17:31.168 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:31.168 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:31.168 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:31.426 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:31.426 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:31.426 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:31.426 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:31.426 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:31.684 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=:
00:17:31.684 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=:
00:17:32.250 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:32.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:32.250 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:17:32.250 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.250 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.250 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.250 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:32.250 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:32.250 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:32.816 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:33.074
00:17:33.074 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:33.074 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:33.074 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:33.333 {
00:17:33.333 "auth": {
00:17:33.333 "dhgroup": "null",
00:17:33.333 "digest": "sha512",
00:17:33.333 "state": "completed"
00:17:33.333 },
00:17:33.333 "cntlid": 99,
00:17:33.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8",
00:17:33.333 "listen_address": {
00:17:33.333 "adrfam": "IPv4",
00:17:33.333 "traddr": "10.0.0.3",
00:17:33.333 "trsvcid": "4420",
00:17:33.333 "trtype": "TCP"
00:17:33.333 },
00:17:33.333 "peer_address": {
00:17:33.333 "adrfam": "IPv4",
00:17:33.333 "traddr": "10.0.0.1",
00:17:33.333 "trsvcid": "55110",
00:17:33.333 "trtype": "TCP"
00:17:33.333 },
00:17:33.333 "qid": 0,
00:17:33.333 "state": "enabled",
00:17:33.333 "thread": "nvmf_tgt_poll_group_000"
00:17:33.333 }
00:17:33.333 ]'
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:33.333 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:33.591 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:33.591 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:33.591 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:33.849 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==:
00:17:33.849 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==:
00:17:34.415 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:34.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:34.415 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:17:34.415 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.415 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.415 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.415 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:34.415 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:34.415 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:34.981 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:35.238
00:17:35.238 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:35.238 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:35.238 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:35.496 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:35.496 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:35.496 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.496 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.496 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.496 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:35.496 {
00:17:35.496 "auth": {
00:17:35.496 "dhgroup": "null",
00:17:35.496 "digest": "sha512",
00:17:35.496 "state": "completed"
00:17:35.496 },
00:17:35.496 "cntlid": 101,
00:17:35.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8",
00:17:35.496 "listen_address": {
00:17:35.496 "adrfam": "IPv4",
00:17:35.496 "traddr": "10.0.0.3",
00:17:35.496 "trsvcid": "4420",
00:17:35.496 "trtype": "TCP"
00:17:35.496 },
00:17:35.496 "peer_address": {
00:17:35.496 "adrfam": "IPv4",
00:17:35.496 "traddr": "10.0.0.1",
00:17:35.496 "trsvcid": "55136",
00:17:35.496 "trtype": "TCP"
00:17:35.496 },
00:17:35.496 "qid": 0,
00:17:35.496 "state": "enabled",
00:17:35.496 "thread": "nvmf_tgt_poll_group_000"
00:17:35.496 }
00:17:35.496 ]'
00:17:35.496 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:35.496 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:35.496 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:35.753 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:35.754 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:35.754 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:35.754 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:35.754 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:36.011 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU:
00:17:36.011 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU:
00:17:36.574 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:36.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:36.574 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:17:36.574 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.574 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.574 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.574 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:36.574 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:36.574 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:36.831 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:17:36.831 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:36.831 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:36.831 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:36.831 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:36.831 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:36.831 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3
00:17:36.831 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.831 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:37.088 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.088 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:37.088 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:37.088 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:37.345
00:17:37.345 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:37.345 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:37.345 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:37.601 {
00:17:37.601 "auth": {
00:17:37.601 "dhgroup": "null",
00:17:37.601 "digest": "sha512",
00:17:37.601 "state": "completed"
00:17:37.601 },
00:17:37.601 "cntlid": 103,
00:17:37.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8",
00:17:37.601 "listen_address": {
00:17:37.601 "adrfam": "IPv4",
00:17:37.601 "traddr": "10.0.0.3",
00:17:37.601 "trsvcid": "4420",
00:17:37.601 "trtype": "TCP"
00:17:37.601 },
00:17:37.601 "peer_address": {
00:17:37.601 "adrfam": "IPv4",
00:17:37.601 "traddr": "10.0.0.1",
00:17:37.601 "trsvcid": "55164",
00:17:37.601 "trtype": "TCP"
00:17:37.601 },
00:17:37.601 "qid": 0,
00:17:37.601 "state": "enabled",
00:17:37.601 "thread": "nvmf_tgt_poll_group_000"
00:17:37.601 }
00:17:37.601 ]'
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:37.601 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:37.859 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:37.859 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:37.859 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:38.116 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=:
00:17:38.116 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=:
00:17:38.681 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:38.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:38.681 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:17:38.681 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.681 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.681 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.681 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:38.681 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:38.681 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:38.681 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:39.006 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:39.272
00:17:39.272 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:39.272 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:39.272 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:39.838 {
00:17:39.838 "auth": {
00:17:39.838 "dhgroup": "ffdhe2048",
00:17:39.838 "digest": "sha512",
00:17:39.838 "state": "completed"
00:17:39.838 },
00:17:39.838 "cntlid": 105,
00:17:39.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8",
00:17:39.838 "listen_address": {
00:17:39.838 "adrfam": "IPv4",
00:17:39.838 "traddr": "10.0.0.3",
00:17:39.838 "trsvcid": "4420",
00:17:39.838 "trtype": "TCP"
00:17:39.838 },
00:17:39.838 "peer_address": {
00:17:39.838 "adrfam": "IPv4",
00:17:39.838 "traddr": "10.0.0.1",
00:17:39.838 "trsvcid": "55190",
00:17:39.838 "trtype": "TCP"
00:17:39.838 },
00:17:39.838 "qid": 0,
00:17:39.838 "state": "enabled",
00:17:39.838 "thread": "nvmf_tgt_poll_group_000"
00:17:39.838 }
00:17:39.838 ]'
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:39.838 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:40.097 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=:
00:17:40.097 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=:
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:41.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:41.033 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:41.034 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:41.034 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.034 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.292 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.292 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:41.292 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:41.292 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:41.551
00:17:41.551 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:41.551 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:41.551 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:41.810 {
00:17:41.810 "auth": {
00:17:41.810 "dhgroup": "ffdhe2048",
00:17:41.810 "digest": "sha512",
00:17:41.810 "state": "completed"
00:17:41.810 },
00:17:41.810 "cntlid": 107,
00:17:41.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8",
00:17:41.810 "listen_address": {
00:17:41.810 "adrfam": "IPv4",
00:17:41.810 "traddr": "10.0.0.3",
00:17:41.810 "trsvcid": "4420",
00:17:41.810 "trtype": "TCP"
00:17:41.810 },
00:17:41.810 "peer_address": {
00:17:41.810 "adrfam": "IPv4",
00:17:41.810 "traddr": "10.0.0.1",
00:17:41.810 "trsvcid": "55228",
00:17:41.810 "trtype": "TCP"
00:17:41.810 },
00:17:41.810 "qid": 0,
00:17:41.810 "state": "enabled",
00:17:41.810 "thread": "nvmf_tgt_poll_group_000"
00:17:41.810 }
00:17:41.810 ]'
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:41.810 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:42.069 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:42.069 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:42.069 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:42.338 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==:
00:17:42.338 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==:
00:17:42.934 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:42.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:42.934 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:17:42.934 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.934 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.934 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.934 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:42.934 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:42.934 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:43.192 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:17:43.192 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:43.192 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:43.192 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:43.192 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:43.192 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:43.192 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:43.192 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.192 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.452 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.452 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:43.452 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:43.452 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:43.710
00:17:43.710 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:43.710 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:43.710 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:43.968 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:43.968 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:43.968 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.968 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.968 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.968 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:43.968 {
00:17:43.968 "auth": {
00:17:43.968 "dhgroup": "ffdhe2048",
00:17:43.968 "digest": "sha512",
00:17:43.968 "state": "completed"
00:17:43.968 },
00:17:43.968 "cntlid": 109,
00:17:43.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8",
00:17:43.968 "listen_address": {
00:17:43.968 "adrfam": "IPv4",
00:17:43.968 "traddr": "10.0.0.3",
00:17:43.968 "trsvcid": "4420",
00:17:43.968 "trtype": "TCP"
00:17:43.968 },
00:17:43.968 "peer_address": {
00:17:43.968 "adrfam": "IPv4",
00:17:43.968 "traddr": "10.0.0.1",
00:17:43.968 "trsvcid": "50888",
00:17:43.968 "trtype": "TCP"
00:17:43.968 },
00:17:43.968 "qid": 0,
00:17:43.968 "state": "enabled",
00:17:43.968 "thread": "nvmf_tgt_poll_group_000"
00:17:43.968 }
00:17:43.968 ]'
00:17:43.968 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:43.968 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:43.968 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:44.226 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:44.226 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:44.226 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:44.226 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:44.226 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:44.485 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU:
00:17:44.485 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU:
00:17:45.050 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:45.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:45.050 13:11:33
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:45.050 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.050 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.050 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.050 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.050 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.050 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.617 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.876 00:17:45.876 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.876 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.876 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.136 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.136 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.136 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.136 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.136 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.136 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.136 { 00:17:46.136 "auth": { 00:17:46.136 "dhgroup": "ffdhe2048", 00:17:46.136 "digest": "sha512", 00:17:46.136 "state": "completed" 00:17:46.136 }, 00:17:46.136 "cntlid": 111, 00:17:46.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:46.136 "listen_address": { 00:17:46.136 "adrfam": "IPv4", 00:17:46.136 "traddr": "10.0.0.3", 00:17:46.136 "trsvcid": "4420", 00:17:46.136 "trtype": "TCP" 00:17:46.136 }, 00:17:46.136 "peer_address": { 00:17:46.136 "adrfam": "IPv4", 00:17:46.136 "traddr": "10.0.0.1", 00:17:46.136 "trsvcid": "50920", 00:17:46.136 "trtype": "TCP" 00:17:46.136 }, 00:17:46.136 "qid": 0, 00:17:46.136 "state": "enabled", 00:17:46.136 "thread": "nvmf_tgt_poll_group_000" 00:17:46.136 } 00:17:46.136 ]' 00:17:46.136 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.136 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.136 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.394 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.394 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.394 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.394 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.395 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.762 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:17:46.762 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:17:47.328 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.328 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:47.328 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.328 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.328 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.328 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.328 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.328 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.328 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.586 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.153 00:17:48.153 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.153 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.153 13:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.412 { 00:17:48.412 "auth": { 00:17:48.412 "dhgroup": "ffdhe3072", 00:17:48.412 "digest": "sha512", 00:17:48.412 "state": "completed" 00:17:48.412 }, 00:17:48.412 "cntlid": 113, 00:17:48.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:48.412 "listen_address": { 00:17:48.412 "adrfam": "IPv4", 00:17:48.412 "traddr": "10.0.0.3", 00:17:48.412 "trsvcid": "4420", 00:17:48.412 "trtype": "TCP" 00:17:48.412 }, 00:17:48.412 "peer_address": { 00:17:48.412 "adrfam": "IPv4", 00:17:48.412 "traddr": "10.0.0.1", 00:17:48.412 "trsvcid": "50940", 00:17:48.412 "trtype": "TCP" 00:17:48.412 }, 00:17:48.412 "qid": 0, 00:17:48.412 "state": "enabled", 00:17:48.412 "thread": "nvmf_tgt_poll_group_000" 00:17:48.412 } 00:17:48.412 ]' 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.412 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.670 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.670 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.670 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.929 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:17:48.929 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 
00:17:49.496 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.496 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:49.496 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.496 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.496 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.496 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.496 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.496 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.754 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:49.754 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.754 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.754 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:49.754 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:49.754 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.754 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.754 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.754 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.755 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.755 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.755 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.755 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.320 00:17:50.320 13:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.320 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.320 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.631 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.631 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.631 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.631 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.631 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.631 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.631 { 00:17:50.631 "auth": { 00:17:50.631 "dhgroup": "ffdhe3072", 00:17:50.631 "digest": "sha512", 00:17:50.631 "state": "completed" 00:17:50.631 }, 00:17:50.631 "cntlid": 115, 00:17:50.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:50.631 "listen_address": { 00:17:50.631 "adrfam": "IPv4", 00:17:50.631 "traddr": "10.0.0.3", 00:17:50.631 "trsvcid": "4420", 00:17:50.631 "trtype": "TCP" 00:17:50.631 }, 00:17:50.631 "peer_address": { 00:17:50.631 "adrfam": "IPv4", 00:17:50.631 "traddr": "10.0.0.1", 00:17:50.631 "trsvcid": "50966", 00:17:50.631 "trtype": "TCP" 00:17:50.631 }, 00:17:50.631 "qid": 0, 00:17:50.631 "state": "enabled", 00:17:50.631 "thread": "nvmf_tgt_poll_group_000" 00:17:50.631 } 00:17:50.631 ]' 00:17:50.632 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.632 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.632 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.632 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.632 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.632 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.632 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.632 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.197 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:17:51.197 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret 
DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:17:51.765 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.765 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:51.765 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.765 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.765 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.765 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.765 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.765 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.023 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:52.023 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.023 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.023 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:52.024 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.024 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.024 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.024 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.024 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.024 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.024 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.024 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.024 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.588 00:17:52.588 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.588 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.588 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.846 { 00:17:52.846 "auth": { 00:17:52.846 "dhgroup": "ffdhe3072", 00:17:52.846 "digest": "sha512", 00:17:52.846 "state": "completed" 00:17:52.846 }, 00:17:52.846 "cntlid": 117, 00:17:52.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:52.846 "listen_address": { 00:17:52.846 "adrfam": "IPv4", 00:17:52.846 "traddr": "10.0.0.3", 00:17:52.846 "trsvcid": "4420", 00:17:52.846 "trtype": "TCP" 00:17:52.846 }, 00:17:52.846 "peer_address": { 00:17:52.846 "adrfam": "IPv4", 00:17:52.846 "traddr": "10.0.0.1", 00:17:52.846 "trsvcid": "59378", 00:17:52.846 "trtype": "TCP" 00:17:52.846 }, 00:17:52.846 "qid": 0, 00:17:52.846 "state": "enabled", 00:17:52.846 "thread": "nvmf_tgt_poll_group_000" 00:17:52.846 } 00:17:52.846 ]' 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.846 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.412 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:17:53.412 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:17:53.978 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.978 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:53.978 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.978 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.978 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.979 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.979 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.979 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.237 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.495 00:17:54.495 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.495 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.495 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.075 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.075 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.075 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.075 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.075 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.075 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.075 { 00:17:55.075 "auth": { 00:17:55.075 "dhgroup": "ffdhe3072", 00:17:55.075 "digest": "sha512", 00:17:55.075 "state": "completed" 00:17:55.075 }, 00:17:55.075 "cntlid": 119, 00:17:55.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:55.075 "listen_address": { 00:17:55.075 "adrfam": "IPv4", 00:17:55.075 "traddr": "10.0.0.3", 00:17:55.075 "trsvcid": "4420", 00:17:55.075 "trtype": "TCP" 00:17:55.075 }, 00:17:55.075 "peer_address": { 00:17:55.075 "adrfam": "IPv4", 00:17:55.075 "traddr": "10.0.0.1", 00:17:55.075 "trsvcid": "59406", 00:17:55.075 "trtype": "TCP" 00:17:55.075 }, 00:17:55.075 "qid": 0, 00:17:55.075 "state": "enabled", 00:17:55.075 "thread": "nvmf_tgt_poll_group_000" 00:17:55.075 } 00:17:55.075 ]' 00:17:55.076 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.076 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.076 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.076 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.076 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.076 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.076 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.076 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.335 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:17:55.335 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:17:55.901 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.159 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:56.159 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.159 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.159 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.159 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.159 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.159 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.159 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.418 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.677 00:17:56.677 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.677 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.677 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.243 { 00:17:57.243 "auth": { 00:17:57.243 "dhgroup": "ffdhe4096", 00:17:57.243 "digest": "sha512", 00:17:57.243 "state": "completed" 00:17:57.243 }, 00:17:57.243 "cntlid": 121, 00:17:57.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:57.243 "listen_address": { 00:17:57.243 "adrfam": "IPv4", 00:17:57.243 "traddr": "10.0.0.3", 00:17:57.243 "trsvcid": "4420", 00:17:57.243 "trtype": "TCP" 00:17:57.243 }, 00:17:57.243 "peer_address": { 00:17:57.243 "adrfam": "IPv4", 00:17:57.243 "traddr": "10.0.0.1", 00:17:57.243 "trsvcid": "59430", 00:17:57.243 "trtype": "TCP" 00:17:57.243 }, 00:17:57.243 "qid": 0, 00:17:57.243 "state": "enabled", 00:17:57.243 "thread": "nvmf_tgt_poll_group_000" 00:17:57.243 } 00:17:57.243 ]' 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.243 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.501 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:17:57.501 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:17:58.068 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.068 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:17:58.068 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.068 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.327 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.327 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.327 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.327 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.585 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.843 00:17:58.843 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.843 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.843 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.183 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.183 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.183 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.183 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.183 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.183 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.183 { 00:17:59.183 "auth": { 00:17:59.183 "dhgroup": "ffdhe4096", 00:17:59.183 "digest": "sha512", 00:17:59.183 "state": "completed" 00:17:59.183 }, 00:17:59.183 "cntlid": 123, 00:17:59.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:17:59.183 "listen_address": { 00:17:59.183 "adrfam": "IPv4", 00:17:59.183 "traddr": "10.0.0.3", 00:17:59.183 "trsvcid": "4420", 00:17:59.183 "trtype": "TCP" 00:17:59.183 }, 00:17:59.183 "peer_address": { 00:17:59.183 "adrfam": "IPv4", 00:17:59.183 "traddr": "10.0.0.1", 00:17:59.183 "trsvcid": "59450", 00:17:59.183 "trtype": "TCP" 00:17:59.183 }, 00:17:59.183 "qid": 0, 00:17:59.183 "state": "enabled", 00:17:59.183 "thread": "nvmf_tgt_poll_group_000" 00:17:59.183 } 00:17:59.183 ]' 00:17:59.183 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.183 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.183 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.443 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.443 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.443 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.443 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.443 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.702 13:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:17:59.702 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:18:00.270 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.270 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:00.270 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.270 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.270 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.270 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.270 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.270 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.836 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.836 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.094 00:18:01.094 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.094 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.094 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.352 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.352 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.352 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.352 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.352 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.352 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.352 { 00:18:01.352 "auth": { 00:18:01.352 "dhgroup": "ffdhe4096", 00:18:01.352 "digest": "sha512", 00:18:01.352 "state": "completed" 00:18:01.352 }, 00:18:01.352 "cntlid": 125, 00:18:01.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:01.352 "listen_address": { 00:18:01.352 "adrfam": "IPv4", 00:18:01.352 "traddr": "10.0.0.3", 00:18:01.352 "trsvcid": "4420", 00:18:01.352 "trtype": "TCP" 00:18:01.352 }, 00:18:01.352 "peer_address": { 00:18:01.352 "adrfam": "IPv4", 00:18:01.352 "traddr": "10.0.0.1", 00:18:01.352 "trsvcid": "59482", 00:18:01.352 "trtype": "TCP" 00:18:01.352 }, 00:18:01.352 "qid": 0, 00:18:01.352 "state": "enabled", 00:18:01.352 "thread": "nvmf_tgt_poll_group_000" 00:18:01.352 } 00:18:01.352 ]' 00:18:01.352 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.609 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.609 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.609 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.609 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.609 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.609 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.609 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.868 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:18:01.868 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:18:02.802 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.802 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:02.802 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.802 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.802 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.802 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.802 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.802 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.061 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.319 00:18:03.319 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.319 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.319 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.885 { 00:18:03.885 "auth": { 00:18:03.885 "dhgroup": "ffdhe4096", 00:18:03.885 "digest": "sha512", 00:18:03.885 "state": "completed" 00:18:03.885 }, 00:18:03.885 "cntlid": 127, 00:18:03.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:03.885 "listen_address": { 00:18:03.885 "adrfam": "IPv4", 00:18:03.885 "traddr": "10.0.0.3", 00:18:03.885 "trsvcid": "4420", 00:18:03.885 "trtype": "TCP" 00:18:03.885 }, 00:18:03.885 "peer_address": { 00:18:03.885 "adrfam": "IPv4", 00:18:03.885 "traddr": "10.0.0.1", 00:18:03.885 "trsvcid": "44856", 00:18:03.885 "trtype": "TCP" 00:18:03.885 }, 00:18:03.885 "qid": 0, 00:18:03.885 "state": "enabled", 00:18:03.885 "thread": "nvmf_tgt_poll_group_000" 00:18:03.885 } 00:18:03.885 ]' 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.885 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.143 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:18:04.143 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:18:04.710 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.969 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:04.969 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.969 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.969 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.969 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.969 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.969 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.969 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.227 13:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.227 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.792 00:18:05.792 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.792 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.792 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.050 { 00:18:06.050 "auth": { 00:18:06.050 "dhgroup": "ffdhe6144", 00:18:06.050 "digest": "sha512", 00:18:06.050 "state": "completed" 00:18:06.050 }, 00:18:06.050 "cntlid": 129, 00:18:06.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:06.050 "listen_address": { 00:18:06.050 "adrfam": "IPv4", 00:18:06.050 "traddr": "10.0.0.3", 00:18:06.050 "trsvcid": "4420", 00:18:06.050 "trtype": "TCP" 00:18:06.050 }, 00:18:06.050 "peer_address": { 00:18:06.050 "adrfam": "IPv4", 00:18:06.050 "traddr": "10.0.0.1", 00:18:06.050 "trsvcid": "44892", 00:18:06.050 "trtype": "TCP" 00:18:06.050 }, 00:18:06.050 "qid": 0, 00:18:06.050 "state": "enabled", 00:18:06.050 "thread": "nvmf_tgt_poll_group_000" 00:18:06.050 } 00:18:06.050 ]' 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.050 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.308 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:18:06.308 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:18:07.240 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.240 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:07.240 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.240 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.240 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.240 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.240 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.240 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.503 13:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.503 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.068 00:18:08.068 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.068 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.068 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.327 { 00:18:08.327 "auth": { 00:18:08.327 "dhgroup": "ffdhe6144", 00:18:08.327 "digest": "sha512", 00:18:08.327 "state": "completed" 00:18:08.327 }, 00:18:08.327 "cntlid": 131, 00:18:08.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:08.327 "listen_address": { 00:18:08.327 "adrfam": "IPv4", 00:18:08.327 "traddr": "10.0.0.3", 00:18:08.327 "trsvcid": "4420", 00:18:08.327 "trtype": "TCP" 00:18:08.327 }, 00:18:08.327 "peer_address": { 00:18:08.327 "adrfam": "IPv4", 00:18:08.327 "traddr": "10.0.0.1", 00:18:08.327 "trsvcid": "44914", 00:18:08.327 "trtype": "TCP" 00:18:08.327 }, 00:18:08.327 "qid": 0, 00:18:08.327 "state": "enabled", 00:18:08.327 "thread": "nvmf_tgt_poll_group_000" 00:18:08.327 } 00:18:08.327 ]' 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.327 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.892 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:18:08.892 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:18:09.457 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.457 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:09.457 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.457 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.457 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.457 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.457 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.457 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.715 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:09.715 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.715 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.715 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.715 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:09.716 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.716 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.716 13:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.716 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.716 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.716 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.716 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.716 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.281 00:18:10.539 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.539 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.539 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.796 { 00:18:10.796 "auth": { 00:18:10.796 "dhgroup": "ffdhe6144", 00:18:10.796 "digest": "sha512", 00:18:10.796 "state": "completed" 00:18:10.796 }, 00:18:10.796 "cntlid": 133, 00:18:10.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:10.796 "listen_address": { 00:18:10.796 "adrfam": "IPv4", 00:18:10.796 "traddr": "10.0.0.3", 00:18:10.796 "trsvcid": "4420", 00:18:10.796 "trtype": "TCP" 00:18:10.796 }, 00:18:10.796 "peer_address": { 00:18:10.796 "adrfam": "IPv4", 00:18:10.796 "traddr": "10.0.0.1", 00:18:10.796 "trsvcid": "44926", 00:18:10.796 "trtype": "TCP" 00:18:10.796 }, 00:18:10.796 "qid": 0, 00:18:10.796 "state": "enabled", 00:18:10.796 "thread": "nvmf_tgt_poll_group_000" 00:18:10.796 } 00:18:10.796 ]' 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.796 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.359 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:18:11.359 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:18:11.924 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.924 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:11.924 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.924 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.924 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.924 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.924 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.924 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.182 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.748 00:18:12.748 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.748 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.748 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.006 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.006 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.006 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.006 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.006 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.006 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.006 { 00:18:13.006 "auth": { 00:18:13.006 "dhgroup": "ffdhe6144", 00:18:13.006 "digest": "sha512", 00:18:13.006 "state": "completed" 00:18:13.006 }, 00:18:13.006 "cntlid": 135, 00:18:13.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:13.006 "listen_address": { 00:18:13.006 "adrfam": "IPv4", 00:18:13.006 "traddr": "10.0.0.3", 00:18:13.006 "trsvcid": "4420", 00:18:13.006 "trtype": "TCP" 00:18:13.006 }, 00:18:13.006 "peer_address": { 00:18:13.006 "adrfam": "IPv4", 00:18:13.006 "traddr": "10.0.0.1", 00:18:13.006 "trsvcid": "35468", 00:18:13.006 "trtype": "TCP" 00:18:13.006 }, 00:18:13.006 "qid": 0, 00:18:13.006 "state": "enabled", 00:18:13.006 "thread": "nvmf_tgt_poll_group_000" 00:18:13.006 } 00:18:13.006 ]' 00:18:13.006 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.263 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.263 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.263 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.263 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.263 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.263 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.263 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.521 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:18:13.521 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:18:14.088 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.088 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:14.088 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.088 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.088 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.088 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.088 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.088 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.088 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.656 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.223 00:18:15.223 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.223 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.223 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.482 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.482 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.482 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.482 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.482 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.482 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.482 { 00:18:15.482 "auth": { 00:18:15.482 "dhgroup": "ffdhe8192", 00:18:15.482 "digest": "sha512", 00:18:15.482 "state": "completed" 00:18:15.482 }, 00:18:15.482 "cntlid": 137, 00:18:15.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:15.482 "listen_address": { 00:18:15.482 "adrfam": "IPv4", 00:18:15.482 "traddr": "10.0.0.3", 00:18:15.482 "trsvcid": "4420", 00:18:15.482 "trtype": "TCP" 00:18:15.482 }, 00:18:15.482 "peer_address": { 00:18:15.482 "adrfam": "IPv4", 00:18:15.482 "traddr": "10.0.0.1", 00:18:15.482 "trsvcid": "35502", 00:18:15.482 "trtype": "TCP" 00:18:15.482 }, 00:18:15.482 "qid": 0, 00:18:15.482 "state": "enabled", 00:18:15.482 "thread": "nvmf_tgt_poll_group_000" 00:18:15.482 } 00:18:15.482 ]' 00:18:15.482 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.740 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.740 13:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.740 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.741 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.741 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.741 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.741 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.059 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:18:16.059 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:18:16.643 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.643 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:16.643 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.643 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.643 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.643 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.643 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.643 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.211 13:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.211 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.777 00:18:17.777 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.777 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.777 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.036 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.036 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.036 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.036 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.036 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.036 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.036 { 00:18:18.036 "auth": { 00:18:18.036 "dhgroup": "ffdhe8192", 00:18:18.036 "digest": "sha512", 00:18:18.036 "state": "completed" 00:18:18.036 }, 00:18:18.036 "cntlid": 139, 00:18:18.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:18.036 "listen_address": { 00:18:18.036 "adrfam": "IPv4", 00:18:18.036 "traddr": "10.0.0.3", 00:18:18.036 "trsvcid": "4420", 00:18:18.036 "trtype": "TCP" 00:18:18.036 }, 00:18:18.036 "peer_address": { 00:18:18.036 "adrfam": "IPv4", 00:18:18.036 "traddr": "10.0.0.1", 00:18:18.036 "trsvcid": "35538", 00:18:18.036 "trtype": "TCP" 00:18:18.036 }, 00:18:18.036 "qid": 0, 00:18:18.036 "state": "enabled", 00:18:18.036 "thread": "nvmf_tgt_poll_group_000" 00:18:18.036 } 00:18:18.036 ]' 00:18:18.036 13:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.295 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.295 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.295 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.295 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.295 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.295 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.295 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.553 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:18:18.553 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: --dhchap-ctrl-secret DHHC-1:02:MjljZjllNGMwMGJlOTQ3ZmMwMjI5Njc3YWExNTYzNjU4NjNjNDgxNmEyOWRkOTU1whXyTQ==: 00:18:19.485 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.485 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:19.485 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.485 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.485 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.485 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.485 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.485 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.485 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:19.485 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.744 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.339 00:18:20.339 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.339 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.339 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.602 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.602 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.602 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.602 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.861 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.861 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.861 { 00:18:20.861 "auth": { 00:18:20.861 "dhgroup": "ffdhe8192", 00:18:20.861 "digest": "sha512", 00:18:20.861 "state": "completed" 00:18:20.861 }, 00:18:20.861 "cntlid": 141, 00:18:20.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:20.861 "listen_address": { 00:18:20.861 "adrfam": "IPv4", 00:18:20.861 "traddr": "10.0.0.3", 00:18:20.861 "trsvcid": "4420", 00:18:20.861 "trtype": "TCP" 00:18:20.861 }, 00:18:20.861 "peer_address": { 00:18:20.861 "adrfam": "IPv4", 00:18:20.861 "traddr": "10.0.0.1", 00:18:20.861 "trsvcid": "35558", 00:18:20.861 "trtype": "TCP" 00:18:20.861 }, 00:18:20.861 "qid": 0, 00:18:20.861 "state": 
"enabled", 00:18:20.861 "thread": "nvmf_tgt_poll_group_000" 00:18:20.861 } 00:18:20.861 ]' 00:18:20.861 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.861 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.861 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.861 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.861 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.861 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.861 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.861 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.119 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:18:21.119 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:01:YjJhZTZlYTE2ZGQyZDBmZDM0YmI0MTBlYzE4NWZmN2b43IMU: 00:18:21.686 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.686 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:21.686 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.686 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.944 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.944 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.944 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.944 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.226 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.795 00:18:22.795 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.795 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.795 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.090 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.090 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.090 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.090 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.090 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.090 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.090 { 00:18:23.090 "auth": { 00:18:23.090 "dhgroup": "ffdhe8192", 00:18:23.090 "digest": "sha512", 00:18:23.090 "state": "completed" 00:18:23.090 }, 00:18:23.090 "cntlid": 143, 00:18:23.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:23.090 "listen_address": { 00:18:23.090 "adrfam": "IPv4", 00:18:23.090 "traddr": "10.0.0.3", 00:18:23.090 "trsvcid": "4420", 00:18:23.090 "trtype": "TCP" 00:18:23.090 }, 00:18:23.090 "peer_address": { 00:18:23.090 "adrfam": "IPv4", 00:18:23.090 "traddr": "10.0.0.1", 00:18:23.090 "trsvcid": "40008", 00:18:23.090 "trtype": "TCP" 00:18:23.090 }, 00:18:23.090 "qid": 0, 00:18:23.090 
"state": "enabled", 00:18:23.090 "thread": "nvmf_tgt_poll_group_000" 00:18:23.090 } 00:18:23.090 ]' 00:18:23.090 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.090 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.090 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.348 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.348 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.348 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.348 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.348 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.606 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:18:23.606 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.541 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.541 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.542 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.478 00:18:25.478 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.478 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.478 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.738 { 00:18:25.738 "auth": { 00:18:25.738 "dhgroup": "ffdhe8192", 00:18:25.738 "digest": "sha512", 00:18:25.738 "state": "completed" 00:18:25.738 }, 00:18:25.738 
"cntlid": 145, 00:18:25.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:25.738 "listen_address": { 00:18:25.738 "adrfam": "IPv4", 00:18:25.738 "traddr": "10.0.0.3", 00:18:25.738 "trsvcid": "4420", 00:18:25.738 "trtype": "TCP" 00:18:25.738 }, 00:18:25.738 "peer_address": { 00:18:25.738 "adrfam": "IPv4", 00:18:25.738 "traddr": "10.0.0.1", 00:18:25.738 "trsvcid": "40050", 00:18:25.738 "trtype": "TCP" 00:18:25.738 }, 00:18:25.738 "qid": 0, 00:18:25.738 "state": "enabled", 00:18:25.738 "thread": "nvmf_tgt_poll_group_000" 00:18:25.738 } 00:18:25.738 ]' 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.738 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.996 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.996 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.996 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.254 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:18:26.254 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:00:ZmJjYzc5MTg1Njk0NTAxZjdmMWMzNmMxYWE2ZjY0YzNjMWY2ZTA2OTZiODI3NDI3GfZKWQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY2ZjliNGE0MTNlZjI0ODY3ODQ1ODM2MmYyY2VlNjcxZmQxN2FmYzViOThkYmYwN2E1ZDVhMDM4YmI3OTljYwfb28Q=: 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 00:18:26.825 13:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:26.825 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:27.760 2024/12/06 13:12:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:27.760 request: 00:18:27.760 { 00:18:27.760 "method": "bdev_nvme_attach_controller", 00:18:27.760 "params": { 00:18:27.760 "name": "nvme0", 00:18:27.760 "trtype": "tcp", 00:18:27.761 "traddr": "10.0.0.3", 00:18:27.761 "adrfam": "ipv4", 00:18:27.761 "trsvcid": "4420", 00:18:27.761 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:27.761 "prchk_reftag": false, 00:18:27.761 "prchk_guard": false, 00:18:27.761 "hdgst": false, 00:18:27.761 "ddgst": false, 00:18:27.761 "dhchap_key": "key2", 00:18:27.761 "allow_unrecognized_csi": false 00:18:27.761 } 00:18:27.761 } 00:18:27.761 Got JSON-RPC error response 00:18:27.761 GoRPCClient: error on JSON-RPC call 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
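The trace above is the mismatched-key negative test: the target registers the host with key1 only (target/auth.sh@144), so the host's attach attempt with key2 (target/auth.sh@145, wrapped in the suite's NOT helper) is expected to fail, and the JSON-RPC response that follows shows the resulting Code=-5 Input/output error. A condensed sketch of the same check, assuming the NQNs, addresses, and socket paths shown in the log; rpc_cmd is the suite's target-side RPC wrapper, while the rpc/hostsock shorthands are introduced here for readability and are not helpers from the suite:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  hostsock=/var/tmp/host.sock
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Target side: allow this host to authenticate with key1 only.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1

  # Host side: attaching with the wrong key must fail (the DH-HMAC-CHAP
  # failure surfaces as Input/output error from bdev_nvme_attach_controller).
  if $rpc -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key2; then
    echo "unexpected success with mismatched key" >&2
    exit 1
  fi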
00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:27.761 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.327 2024/12/06 13:12:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:28.327 request: 00:18:28.327 { 00:18:28.327 "method": "bdev_nvme_attach_controller", 00:18:28.327 "params": { 00:18:28.327 "name": "nvme0", 00:18:28.327 "trtype": "tcp", 00:18:28.327 "traddr": "10.0.0.3", 00:18:28.327 "adrfam": "ipv4", 00:18:28.327 "trsvcid": "4420", 00:18:28.327 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:28.327 "prchk_reftag": false, 00:18:28.327 "prchk_guard": false, 00:18:28.327 "hdgst": false, 00:18:28.327 "ddgst": false, 00:18:28.327 "dhchap_key": "key1", 00:18:28.327 "dhchap_ctrlr_key": "ckey2", 00:18:28.327 "allow_unrecognized_csi": false 00:18:28.327 } 00:18:28.327 } 00:18:28.327 Got JSON-RPC error response 00:18:28.327 GoRPCClient: error on JSON-RPC call 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.327 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.894 2024/12/06 13:12:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:28.894 request: 00:18:28.894 { 00:18:28.894 "method": "bdev_nvme_attach_controller", 00:18:28.894 "params": { 00:18:28.894 "name": "nvme0", 00:18:28.894 "trtype": "tcp", 00:18:28.894 "traddr": "10.0.0.3", 00:18:28.894 "adrfam": "ipv4", 00:18:28.894 "trsvcid": "4420", 00:18:28.894 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:28.894 "prchk_reftag": false, 00:18:28.894 "prchk_guard": false, 00:18:28.894 "hdgst": false, 00:18:28.894 "ddgst": false, 00:18:28.894 "dhchap_key": "key1", 00:18:28.894 "dhchap_ctrlr_key": "ckey1", 00:18:28.894 "allow_unrecognized_csi": false 00:18:28.894 } 00:18:28.894 } 00:18:28.894 Got JSON-RPC error response 00:18:28.894 GoRPCClient: error on JSON-RPC call 00:18:28.894 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.894 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.894 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.894 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.894 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:28.894 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.894 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.894 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.894 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76786 00:18:28.894 13:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76786 ']' 00:18:28.894 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76786 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76786 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:29.163 killing process with pid 76786 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76786' 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76786 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76786 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81820 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81820 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81820 ']' 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
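Here the first target process (pid 76786) is killed and a fresh nvmf_tgt is started inside the test netns with --wait-for-rpc and the nvmf_auth debug log component enabled, so the remaining cases can observe authentication-state transitions in the target log. A minimal sketch of that restart, reusing the exact command line recorded above; the pid-wait loop is an assumption standing in for the suite's killprocess helper:

  # Stop the old target and wait for its pid to disappear.
  kill 76786
  while kill -0 76786 2>/dev/null; do sleep 0.1; done

  # Relaunch with RPC-gated init and nvmf_auth debug logging enabled.
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!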
00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.163 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81820 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81820 ']' 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
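waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. One way to approximate it is to poll a cheap RPC until it succeeds; rpc_get_methods is a standard SPDK RPC, but using it for the readiness probe is this sketch's assumption, not necessarily what the helper does internally:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  # Poll until the RPC server on the default socket responds.
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done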
00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.767 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.025 null0 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rvh 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.mLf ]] 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mLf 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:30.025 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5Wa 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.v8x ]] 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.v8x 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:30.026 13:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.i4P 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.4NV ]] 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4NV 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.k7a 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.026 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.284 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.284 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.284 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
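With the target restarted, target/auth.sh@174-176 reloads every DHCHAP key file into the target's keyring: each keyN is added with keyring_file_add_key, and the matching controller key ckeyN is added only when its file exists (ckey3 is empty in this run, so the [[ -n '' ]] guard above skips it). A compact sketch of that loop over the key paths shown in the trace; the keys/ckeys arrays mirror the suite's variables, though their actual construction happens earlier in the script:

  declare -A keys=( [0]=/tmp/spdk.key-null.rvh [1]=/tmp/spdk.key-sha256.5Wa
                    [2]=/tmp/spdk.key-sha384.i4P [3]=/tmp/spdk.key-sha512.k7a )
  declare -A ckeys=( [0]=/tmp/spdk.key-sha512.mLf [1]=/tmp/spdk.key-sha384.v8x
                     [2]=/tmp/spdk.key-sha256.4NV [3]= )

  for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
    # Controller keys are optional; skip the ones with no file.
    [[ -n ${ckeys[$i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  done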
00:18:30.284 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.218 nvme0n1 00:18:31.218 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.218 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.218 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.475 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.475 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.475 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.475 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.475 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.475 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.475 { 00:18:31.475 "auth": { 00:18:31.475 "dhgroup": "ffdhe8192", 00:18:31.475 "digest": "sha512", 00:18:31.475 "state": "completed" 00:18:31.475 }, 00:18:31.475 "cntlid": 1, 00:18:31.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:31.475 "listen_address": { 00:18:31.475 "adrfam": "IPv4", 00:18:31.475 "traddr": "10.0.0.3", 00:18:31.475 "trsvcid": "4420", 00:18:31.475 "trtype": "TCP" 00:18:31.475 }, 00:18:31.475 "peer_address": { 00:18:31.475 "adrfam": "IPv4", 00:18:31.475 "traddr": "10.0.0.1", 00:18:31.475 "trsvcid": "40106", 00:18:31.475 "trtype": "TCP" 00:18:31.475 }, 00:18:31.475 "qid": 0, 00:18:31.475 "state": "enabled", 00:18:31.475 "thread": "nvmf_tgt_poll_group_000" 00:18:31.475 } 00:18:31.475 ]' 00:18:31.475 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.475 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.475 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.732 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.732 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.732 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.732 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.732 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.989 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:18:31.989 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key3 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:32.921 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:33.179 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:33.179 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:33.179 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:33.179 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:33.179 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.179 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:33.179 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.179 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.179 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.179 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.437 2024/12/06 13:12:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:33.437 request: 00:18:33.437 { 00:18:33.437 "method": "bdev_nvme_attach_controller", 00:18:33.437 "params": { 00:18:33.437 "name": "nvme0", 00:18:33.437 "trtype": "tcp", 00:18:33.437 "traddr": "10.0.0.3", 00:18:33.437 "adrfam": "ipv4", 00:18:33.437 "trsvcid": "4420", 00:18:33.437 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:33.437 "prchk_reftag": false, 00:18:33.437 "prchk_guard": false, 00:18:33.437 "hdgst": false, 00:18:33.437 "ddgst": false, 00:18:33.437 "dhchap_key": "key3", 00:18:33.437 "allow_unrecognized_csi": false 00:18:33.437 } 00:18:33.437 } 00:18:33.437 Got JSON-RPC error response 00:18:33.437 GoRPCClient: error on JSON-RPC call 00:18:33.437 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:33.437 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.437 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.437 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.437 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:33.437 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:33.437 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:33.438 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:34.053 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:34.053 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:34.053 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:34.053 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:34.053 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.053 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:34.053 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.053 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.053 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.053 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.312 2024/12/06 13:12:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:34.312 request: 00:18:34.312 { 00:18:34.312 "method": "bdev_nvme_attach_controller", 00:18:34.312 "params": { 00:18:34.312 "name": "nvme0", 00:18:34.312 "trtype": "tcp", 00:18:34.312 "traddr": "10.0.0.3", 00:18:34.312 "adrfam": "ipv4", 00:18:34.312 "trsvcid": "4420", 00:18:34.312 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:34.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:34.312 "prchk_reftag": false, 00:18:34.312 "prchk_guard": false, 00:18:34.312 "hdgst": false, 00:18:34.312 "ddgst": false, 00:18:34.312 "dhchap_key": "key3", 00:18:34.312 "allow_unrecognized_csi": false 00:18:34.312 } 00:18:34.312 } 00:18:34.312 Got JSON-RPC error response 00:18:34.312 GoRPCClient: error on JSON-RPC call 00:18:34.312 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:34.312 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.312 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.312 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.312 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:34.312 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:34.312 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:34.312 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.312 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.313 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.571 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:35.138 2024/12/06 13:12:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:35.138 request: 00:18:35.138 { 00:18:35.138 "method": "bdev_nvme_attach_controller", 00:18:35.138 "params": { 00:18:35.138 "name": "nvme0", 00:18:35.138 "trtype": "tcp", 00:18:35.138 "traddr": "10.0.0.3", 00:18:35.138 "adrfam": "ipv4", 00:18:35.138 "trsvcid": "4420", 00:18:35.138 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:35.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:35.138 "prchk_reftag": false, 00:18:35.138 "prchk_guard": false, 00:18:35.138 "hdgst": false, 00:18:35.138 "ddgst": false, 00:18:35.138 "dhchap_key": "key0", 00:18:35.138 "dhchap_ctrlr_key": "key1", 00:18:35.138 "allow_unrecognized_csi": false 00:18:35.138 } 00:18:35.138 } 00:18:35.138 Got JSON-RPC error response 00:18:35.138 GoRPCClient: error on JSON-RPC call 00:18:35.138 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:35.138 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:35.138 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:35.138 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:35.138 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:35.138 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:35.138 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:35.396 nvme0n1 00:18:35.396 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:35.396 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.396 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:35.964 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.964 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.964 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.964 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 00:18:35.964 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.964 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:35.964 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.964 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:35.964 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:35.964 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:36.899 nvme0n1 00:18:37.155 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:37.155 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.155 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:37.413 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.413 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.413 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.413 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.413 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.413 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:37.413 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:37.413 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.669 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.669 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:18:37.669 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid 02865f8e-a4d1-44f6-b22e-478c939aabc8 -l 0 --dhchap-secret DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: --dhchap-ctrl-secret DHHC-1:03:NjRmMzI2NWIwMzE5ODQwZjIzM2YxNWNjMjQ3N2IwNDNlMmU0YTA3ZTQzNDdmNWNjYmQwNWJjN2FhNjk1NzcxYyKN6yI=: 00:18:38.311 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
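The `nvme connect` call above exercises bidirectional DH-HMAC-CHAP from the kernel initiator: `--dhchap-secret` carries the host key (a DHHC-1 blob) used to answer the target's challenge, and `--dhchap-ctrl-secret` carries the controller key with which the host challenges the target back. The `nvme_get_ctrlr` helper invoked next then resolves which `/sys/devices/virtual/nvme-fabrics/ctl/nvme*` entry belongs to the subsystem NQN. A minimal standalone sketch of the same connect, with the secrets shortened to placeholders and the address, port, and NQNs taken from the log (the harness additionally passes `-i`, `-l`, and `--hostid`):

  # mutual DH-HMAC-CHAP connect via nvme-cli; the DHHC-1 strings are placeholders
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2024-03.io.spdk:cnode0 \
      -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 \
      --dhchap-secret 'DHHC-1:02:<host key>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<ctrlr key>'
  # on success the controller appears under /sys/devices/virtual/nvme-fabrics/ctl/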
00:18:38.311 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:38.311 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:38.311 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:38.311 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:38.311 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:38.311 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:38.311 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.311 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.569 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:38.569 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:38.569 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:38.569 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:38.569 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.569 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:38.569 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.569 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:38.569 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:38.569 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:39.180 2024/12/06 13:12:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:39.180 request: 00:18:39.180 { 00:18:39.180 "method": "bdev_nvme_attach_controller", 00:18:39.180 "params": { 00:18:39.180 "name": "nvme0", 00:18:39.180 "trtype": "tcp", 00:18:39.180 "traddr": "10.0.0.3", 00:18:39.180 "adrfam": "ipv4", 
00:18:39.180 "trsvcid": "4420", 00:18:39.180 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8", 00:18:39.180 "prchk_reftag": false, 00:18:39.180 "prchk_guard": false, 00:18:39.180 "hdgst": false, 00:18:39.180 "ddgst": false, 00:18:39.180 "dhchap_key": "key1", 00:18:39.180 "allow_unrecognized_csi": false 00:18:39.180 } 00:18:39.180 } 00:18:39.180 Got JSON-RPC error response 00:18:39.180 GoRPCClient: error on JSON-RPC call 00:18:39.436 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:39.436 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.436 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.436 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.436 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.436 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.436 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.368 nvme0n1 00:18:40.368 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:40.368 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:40.368 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.626 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.626 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.626 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.191 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:41.191 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.191 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.191 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.191 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:41.191 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:41.191 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:41.449 nvme0n1 00:18:41.449 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:41.449 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.449 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:41.708 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.708 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.708 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.274 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:42.274 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.274 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.274 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: '' 2s 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: ]] 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzU4Njk5N2Y2Y2NkZjAyMzU2ZTExNWIwM2M1MzBkYTjzHyUL: 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:42.275 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.177 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: 2s 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: ]] 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MmQ0YWRlYjBmMjg5Y2E4ODk3MjBjZTY1OWZlYzY3MDNmZTYyNTgzZmI2MzhkZGRkZ/80hw==: 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:44.178 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:46.707 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:47.272 nvme0n1 00:18:47.272 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.272 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.272 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.272 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.272 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.272 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:48.209 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:48.209 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.209 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r 
'.[].name' 00:18:48.468 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.468 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:48.468 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.468 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.468 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.468 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:48.468 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:48.729 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:48.729 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:48.729 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.987 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 
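This is the negative re-keying case: target/auth.sh@260 has just restricted the subsystem's allowed keys to key2/key3 with `nvmf_subsystem_set_keys`, so the host's attempt to re-authenticate the live controller with key1 via `bdev_nvme_set_keys` must be refused, and the JSON-RPC error that follows (Code=-13, Permission denied) is exactly the failure the `NOT` wrapper asserts. The RPC pair, condensed from the log (subsystem and host NQNs as above):

  # target side: only key2 (host) / key3 (controller) are now acceptable
  rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # host side: re-keying to key1 is expected to fail with -13 Permission denied
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key key3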
00:18:49.553 2024/12/06 13:12:38 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:18:49.553 request: 00:18:49.553 { 00:18:49.553 "method": "bdev_nvme_set_keys", 00:18:49.553 "params": { 00:18:49.553 "name": "nvme0", 00:18:49.553 "dhchap_key": "key1", 00:18:49.553 "dhchap_ctrlr_key": "key3" 00:18:49.553 } 00:18:49.553 } 00:18:49.553 Got JSON-RPC error response 00:18:49.553 GoRPCClient: error on JSON-RPC call 00:18:49.811 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:49.811 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.811 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.811 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.811 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:49.811 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.811 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:50.070 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:50.070 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:51.030 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:51.030 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:51.030 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.288 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:51.288 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:51.288 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.288 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.288 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.288 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.288 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.288 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:52.665 nvme0n1 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:52.665 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:52.923 2024/12/06 13:12:41 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:18:52.923 request: 00:18:52.923 { 00:18:52.923 "method": "bdev_nvme_set_keys", 00:18:52.923 "params": { 00:18:52.923 "name": "nvme0", 00:18:52.923 "dhchap_key": "key2", 00:18:52.923 "dhchap_ctrlr_key": "key0" 00:18:52.923 } 00:18:52.923 } 00:18:52.923 Got JSON-RPC error response 00:18:52.923 GoRPCClient: error on JSON-RPC call 00:18:53.181 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:53.181 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.181 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.181 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.181 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:53.181 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.181 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:53.439 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:53.439 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:54.372 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:54.372 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:54.372 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.644 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:54.644 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:54.644 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76830 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76830 ']' 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76830 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76830 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:54.645 killing process with pid 76830 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76830' 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76830 00:18:54.645 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76830 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:55.211 rmmod nvme_tcp 00:18:55.211 rmmod nvme_fabrics 00:18:55.211 rmmod nvme_keyring 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 
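Teardown begins here: `killprocess 76830` has stopped the host-side SPDK application (reactor_1), and `nvmftestfini` now unloads the kernel initiator stack, visible as the `rmmod nvme_tcp` / `rmmod nvme_fabrics` / `rmmod nvme_keyring` lines above, before killing the nvmf target app (pid 81820) on the lines that follow. The module removal sits inside a `set +e` retry loop because the modules can still be briefly busy right after a disconnect; roughly, as an illustrative condensation of the common.sh pattern (the exact loop body is an assumption, only the `for i in {1..20}` and `modprobe -v -r` calls appear in the trace):

  set +e
  for i in {1..20}; do                  # modules may be briefly busy post-disconnect
      modprobe -v -r nvme-tcp && break
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e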
00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 81820 ']' 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 81820 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81820 ']' 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81820 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81820 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.211 killing process with pid 81820 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81820' 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81820 00:18:55.211 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81820 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:55.469 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:55.469 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:55.469 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:55.469 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:55.469 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:55.469 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:55.469 13:12:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:55.469 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:55.469 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:55.725 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:55.725 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:55.725 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.725 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:55.725 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.725 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rvh /tmp/spdk.key-sha256.5Wa /tmp/spdk.key-sha384.i4P /tmp/spdk.key-sha512.k7a /tmp/spdk.key-sha512.mLf /tmp/spdk.key-sha384.v8x /tmp/spdk.key-sha256.4NV '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:18:55.726 00:18:55.726 real 3m24.592s 00:18:55.726 user 8m18.667s 00:18:55.726 sys 0m25.063s 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.726 ************************************ 00:18:55.726 END TEST nvmf_auth_target 00:18:55.726 ************************************ 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.726 ************************************ 00:18:55.726 START TEST nvmf_bdevio_no_huge 00:18:55.726 ************************************ 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:55.726 * Looking for test storage... 
00:18:55.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:55.726 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:55.982 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:55.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.983 --rc genhtml_branch_coverage=1 00:18:55.983 --rc genhtml_function_coverage=1 00:18:55.983 --rc genhtml_legend=1 00:18:55.983 --rc geninfo_all_blocks=1 00:18:55.983 --rc geninfo_unexecuted_blocks=1 00:18:55.983 00:18:55.983 ' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:55.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.983 --rc genhtml_branch_coverage=1 00:18:55.983 --rc genhtml_function_coverage=1 00:18:55.983 --rc genhtml_legend=1 00:18:55.983 --rc geninfo_all_blocks=1 00:18:55.983 --rc geninfo_unexecuted_blocks=1 00:18:55.983 00:18:55.983 ' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:55.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.983 --rc genhtml_branch_coverage=1 00:18:55.983 --rc genhtml_function_coverage=1 00:18:55.983 --rc genhtml_legend=1 00:18:55.983 --rc geninfo_all_blocks=1 00:18:55.983 --rc geninfo_unexecuted_blocks=1 00:18:55.983 00:18:55.983 ' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:55.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.983 --rc genhtml_branch_coverage=1 00:18:55.983 --rc genhtml_function_coverage=1 00:18:55.983 --rc genhtml_legend=1 00:18:55.983 --rc geninfo_all_blocks=1 00:18:55.983 --rc geninfo_unexecuted_blocks=1 00:18:55.983 00:18:55.983 ' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:55.983 
13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.983 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:55.983 
13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:55.983 Cannot find device "nvmf_init_br" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:55.983 Cannot find device "nvmf_init_br2" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:55.983 Cannot find device "nvmf_tgt_br" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:55.983 Cannot find device "nvmf_tgt_br2" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:55.983 Cannot find device "nvmf_init_br" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:55.983 Cannot find device "nvmf_init_br2" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:55.983 Cannot find device "nvmf_tgt_br" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:55.983 Cannot find device "nvmf_tgt_br2" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:55.983 Cannot find device "nvmf_br" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:55.983 Cannot find device "nvmf_init_if" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:55.983 Cannot find device "nvmf_init_if2" 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:18:55.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:55.983 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:56.240 13:12:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:56.240 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:56.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:56.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:18:56.241 00:18:56.241 --- 10.0.0.3 ping statistics --- 00:18:56.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.241 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:56.241 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:56.241 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:18:56.241 00:18:56.241 --- 10.0.0.4 ping statistics --- 00:18:56.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.241 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:56.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:56.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:56.241 00:18:56.241 --- 10.0.0.1 ping statistics --- 00:18:56.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.241 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:56.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:56.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:56.241 00:18:56.241 --- 10.0.0.2 ping statistics --- 00:18:56.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.241 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82693 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82693 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82693 ']' 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.241 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.497 [2024-12-06 13:12:44.918843] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:18:56.497 [2024-12-06 13:12:44.918958] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:56.497 [2024-12-06 13:12:45.109552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.754 [2024-12-06 13:12:45.192701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.754 [2024-12-06 13:12:45.192756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.754 [2024-12-06 13:12:45.192778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.754 [2024-12-06 13:12:45.192789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.754 [2024-12-06 13:12:45.192799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.754 [2024-12-06 13:12:45.193494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:56.754 [2024-12-06 13:12:45.193599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:56.754 [2024-12-06 13:12:45.193712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:56.754 [2024-12-06 13:12:45.193720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.685 [2024-12-06 13:12:46.123888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.685 Malloc0 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.685 [2024-12-06 13:12:46.164117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:57.685 { 00:18:57.685 "params": { 00:18:57.685 "name": "Nvme$subsystem", 00:18:57.685 "trtype": "$TEST_TRANSPORT", 00:18:57.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.685 "adrfam": "ipv4", 00:18:57.685 "trsvcid": "$NVMF_PORT", 00:18:57.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.685 "hdgst": ${hdgst:-false}, 00:18:57.685 "ddgst": ${ddgst:-false} 00:18:57.685 }, 00:18:57.685 "method": "bdev_nvme_attach_controller" 00:18:57.685 } 00:18:57.685 EOF 00:18:57.685 )") 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
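Stripped of the xtrace noise, bdevio.sh provisions the target with five RPC calls. Replayed by hand through the rpc.py client this run uses elsewhere (a sketch against the default /var/tmp/spdk.sock, not the test's rpc_cmd wrapper), that sequence is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192   # -o drops TCP's C2H success optimization, -u is the IO unit size
$rpc bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512 from bdevio.sh
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host NQN
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # Malloc0 becomes namespace 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The JSON assembled around this point is the initiator-side counterpart: gen_nvmf_target_json fills the bdev_nvme_attach_controller template (Nvme1 -> 10.0.0.3:4420, digests off) that the bdevio binary reads through --json /dev/fd/62.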
00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:57.685 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:57.685 "params": { 00:18:57.685 "name": "Nvme1", 00:18:57.685 "trtype": "tcp", 00:18:57.685 "traddr": "10.0.0.3", 00:18:57.685 "adrfam": "ipv4", 00:18:57.685 "trsvcid": "4420", 00:18:57.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:57.685 "hdgst": false, 00:18:57.685 "ddgst": false 00:18:57.685 }, 00:18:57.685 "method": "bdev_nvme_attach_controller" 00:18:57.685 }' 00:18:57.685 [2024-12-06 13:12:46.221777] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:18:57.685 [2024-12-06 13:12:46.222300] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82747 ] 00:18:57.942 [2024-12-06 13:12:46.373747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:57.942 [2024-12-06 13:12:46.443413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.942 [2024-12-06 13:12:46.443527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.942 [2024-12-06 13:12:46.443531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.199 I/O targets: 00:18:58.199 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:58.199 00:18:58.199 00:18:58.199 CUnit - A unit testing framework for C - Version 2.1-3 00:18:58.199 http://cunit.sourceforge.net/ 00:18:58.199 00:18:58.199 00:18:58.199 Suite: bdevio tests on: Nvme1n1 00:18:58.199 Test: blockdev write read block ...passed 00:18:58.199 Test: blockdev write zeroes read block ...passed 00:18:58.199 Test: blockdev write zeroes read no split ...passed 00:18:58.199 Test: blockdev write zeroes read split ...passed 00:18:58.199 Test: blockdev write zeroes read split partial ...passed 00:18:58.199 Test: blockdev reset ...[2024-12-06 13:12:46.802561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:58.199 [2024-12-06 13:12:46.802701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ddeb0 (9): Bad file descriptor 00:18:58.199 [2024-12-06 13:12:46.815910] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:58.199 passed 00:18:58.199 Test: blockdev write read 8 blocks ...passed 00:18:58.199 Test: blockdev write read size > 128k ...passed 00:18:58.199 Test: blockdev write read invalid size ...passed 00:18:58.457 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:58.457 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:58.457 Test: blockdev write read max offset ...passed 00:18:58.457 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:58.457 Test: blockdev writev readv 8 blocks ...passed 00:18:58.457 Test: blockdev writev readv 30 x 1block ...passed 00:18:58.457 Test: blockdev writev readv block ...passed 00:18:58.457 Test: blockdev writev readv size > 128k ...passed 00:18:58.457 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:58.457 Test: blockdev comparev and writev ...[2024-12-06 13:12:46.990666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.457 [2024-12-06 13:12:46.990732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.457 [2024-12-06 13:12:46.990754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.457 [2024-12-06 13:12:46.990765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:58.457 [2024-12-06 13:12:46.991073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.457 [2024-12-06 13:12:46.991102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:58.457 [2024-12-06 13:12:46.991121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.457 [2024-12-06 13:12:46.991133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:58.457 [2024-12-06 13:12:46.991489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.457 [2024-12-06 13:12:46.991519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:58.457 [2024-12-06 13:12:46.991536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.457 [2024-12-06 13:12:46.991548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:58.457 [2024-12-06 13:12:46.991869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.457 [2024-12-06 13:12:46.991898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:58.457 [2024-12-06 13:12:46.991915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.457 [2024-12-06 13:12:46.991926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:58.457 passed 00:18:58.457 Test: blockdev nvme passthru rw ...passed 00:18:58.457 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:12:47.075872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.457 [2024-12-06 13:12:47.075928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:58.457 [2024-12-06 13:12:47.076315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.457 [2024-12-06 13:12:47.076344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:58.457 [2024-12-06 13:12:47.076580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.457 [2024-12-06 13:12:47.076614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:58.457 [2024-12-06 13:12:47.076839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.457 [2024-12-06 13:12:47.076871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:58.457 passed 00:18:58.457 Test: blockdev nvme admin passthru ...passed 00:18:58.716 Test: blockdev copy ...passed 00:18:58.716 00:18:58.716 Run Summary: Type Total Ran Passed Failed Inactive 00:18:58.716 suites 1 1 n/a 0 0 00:18:58.716 tests 23 23 23 0 0 00:18:58.716 asserts 152 152 152 0 n/a 00:18:58.716 00:18:58.716 Elapsed time = 0.936 seconds 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:58.975 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:59.233 rmmod nvme_tcp 00:18:59.233 rmmod nvme_fabrics 00:18:59.233 rmmod nvme_keyring 00:18:59.233 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:59.233 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:59.233 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:59.233 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82693 ']' 00:18:59.233 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82693 00:18:59.233 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82693 ']' 00:18:59.233 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82693 00:18:59.233 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:59.233 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.234 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82693 00:18:59.234 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:59.234 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:59.234 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82693' 00:18:59.234 killing process with pid 82693 00:18:59.234 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82693 00:18:59.234 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82693 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:59.492 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:59.751 13:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:18:59.751 00:18:59.751 real 0m4.072s 00:18:59.751 user 0m13.966s 00:18:59.751 sys 0m1.564s 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:59.751 ************************************ 00:18:59.751 END TEST nvmf_bdevio_no_huge 00:18:59.751 ************************************ 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.751 ************************************ 00:18:59.751 START TEST nvmf_tls 00:18:59.751 ************************************ 00:18:59.751 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:00.012 * Looking for test storage... 
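One detail of the teardown just above is worth calling out before the tls suite repeats the same bring-up: the firewall cleanup never tracks individual rules. Every rule was added through the ipts wrapper, which tags it with an SPDK_NVMF comment, so iptr can simply restore a ruleset dump with those lines filtered out. Reduced to a sketch of the idea (not the verbatim common.sh helpers):

ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }     # tag every rule the test inserts
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }  # teardown: drop exactly the tagged rules

The tag also makes the test's rules easy to spot in plain iptables-save output while a run is live.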
00:19:00.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:00.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.012 --rc genhtml_branch_coverage=1 00:19:00.012 --rc genhtml_function_coverage=1 00:19:00.012 --rc genhtml_legend=1 00:19:00.012 --rc geninfo_all_blocks=1 00:19:00.012 --rc geninfo_unexecuted_blocks=1 00:19:00.012 00:19:00.012 ' 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:00.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.012 --rc genhtml_branch_coverage=1 00:19:00.012 --rc genhtml_function_coverage=1 00:19:00.012 --rc genhtml_legend=1 00:19:00.012 --rc geninfo_all_blocks=1 00:19:00.012 --rc geninfo_unexecuted_blocks=1 00:19:00.012 00:19:00.012 ' 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:00.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.012 --rc genhtml_branch_coverage=1 00:19:00.012 --rc genhtml_function_coverage=1 00:19:00.012 --rc genhtml_legend=1 00:19:00.012 --rc geninfo_all_blocks=1 00:19:00.012 --rc geninfo_unexecuted_blocks=1 00:19:00.012 00:19:00.012 ' 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:00.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.012 --rc genhtml_branch_coverage=1 00:19:00.012 --rc genhtml_function_coverage=1 00:19:00.012 --rc genhtml_legend=1 00:19:00.012 --rc geninfo_all_blocks=1 00:19:00.012 --rc geninfo_unexecuted_blocks=1 00:19:00.012 00:19:00.012 ' 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.012 13:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.012 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:00.013 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:00.013 
13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:00.013 Cannot find device "nvmf_init_br" 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:00.013 Cannot find device "nvmf_init_br2" 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:00.013 Cannot find device "nvmf_tgt_br" 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:19:00.013 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:00.272 Cannot find device "nvmf_tgt_br2" 00:19:00.272 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:19:00.272 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:00.272 Cannot find device "nvmf_init_br" 00:19:00.272 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:19:00.272 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:00.272 Cannot find device "nvmf_init_br2" 00:19:00.272 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:19:00.272 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:00.272 Cannot find device "nvmf_tgt_br" 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:00.273 Cannot find device "nvmf_tgt_br2" 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:00.273 Cannot find device "nvmf_br" 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:00.273 Cannot find device "nvmf_init_if" 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:00.273 Cannot find device "nvmf_init_if2" 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:00.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:00.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:00.273 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:00.532 13:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:00.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:19:00.532 00:19:00.532 --- 10.0.0.3 ping statistics --- 00:19:00.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.532 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:00.532 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:00.532 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:19:00.532 00:19:00.532 --- 10.0.0.4 ping statistics --- 00:19:00.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.532 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:00.532 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:00.532 00:19:00.532 --- 10.0.0.1 ping statistics --- 00:19:00.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.532 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:00.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:00.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:19:00.532 00:19:00.532 --- 10.0.0.2 ping statistics --- 00:19:00.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.532 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82991 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82991 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82991 ']' 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.532 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.532 [2024-12-06 13:12:49.086443] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
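Up to this point nvmf/common.sh has torn down any leftovers (the "Cannot find device" and "Cannot open network namespace" lines are the expected misses on a clean host) and rebuilt the test network: veth pairs for initiator and target, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers bridged, iptables opened for port 4420, and connectivity verified by ping. Condensed to a single initiator/target pair, this is a sketch of what the script does, with names and addresses as in the log, not the script itself:

    # one veth pair for the initiator, one for the target (target end goes into a netns)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers so 10.0.0.1 can reach the namespaced 10.0.0.3
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port; the ipts wrapper tags rules so cleanup can find them
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3   # sanity check: initiator side reaches the namespaced target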
00:19:00.532 [2024-12-06 13:12:49.087129] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.828 [2024-12-06 13:12:49.239575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.828 [2024-12-06 13:12:49.310615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.828 [2024-12-06 13:12:49.310674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.828 [2024-12-06 13:12:49.310687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.828 [2024-12-06 13:12:49.310698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.828 [2024-12-06 13:12:49.310709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.828 [2024-12-06 13:12:49.311194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.764 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.764 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.764 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.764 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.764 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.764 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.764 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:01.764 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:02.022 true 00:19:02.022 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.022 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:02.280 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:02.280 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:02.280 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:02.537 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:02.537 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.796 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:02.796 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:02.796 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:03.054 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:19:03.054 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:03.621 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:03.621 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:03.621 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.621 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:03.621 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:03.621 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:03.621 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:04.187 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.187 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:04.445 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:04.445 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:04.445 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:04.703 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.703 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.aP8DBLve03 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.IKKz2Pi7aC 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aP8DBLve03 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.IKKz2Pi7aC 00:19:04.962 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:05.234 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:05.818 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.aP8DBLve03 00:19:05.818 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aP8DBLve03 00:19:05.818 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:06.132 [2024-12-06 13:12:54.457917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.132 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:06.393 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:06.393 [2024-12-06 13:12:55.022035] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.393 [2024-12-06 13:12:55.022292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:06.668 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:06.668 malloc0 00:19:06.933 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:07.191 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aP8DBLve03 00:19:07.450 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:07.708 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aP8DBLve03 00:19:17.876 Initializing NVMe Controllers 00:19:17.876 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:17.876 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:17.876 Initialization complete. Launching workers. 00:19:17.876 ======================================================== 00:19:17.876 Latency(us) 00:19:17.876 Device Information : IOPS MiB/s Average min max 00:19:17.876 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9735.99 38.03 6575.05 1211.39 10177.27 00:19:17.876 ======================================================== 00:19:17.876 Total : 9735.99 38.03 6575.05 1211.39 10177.27 00:19:17.876 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aP8DBLve03 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aP8DBLve03 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83363 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83363 /var/tmp/bdevperf.sock 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83363 ']' 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.876 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.876 [2024-12-06 13:13:06.388125] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
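The PSK files used above and throughout the rest of the run (/tmp/tmp.aP8DBLve03 and /tmp/tmp.IKKz2Pi7aC) come from format_interchange_psk, which wraps raw key material in the NVMe TLS PSK interchange format: the prefix NVMeTLSkey-1, a hash identifier (01 here, matching the digest argument of 1), base64 over the key bytes plus a CRC-32 trailer, and a closing colon. A sketch of the encoding the embedded python step performs, assuming the CRC is packed little-endian (decoding the first key above back to 00112233445566778899aabbccddeeff plus four trailing bytes is consistent with that assumption):

    python3 - <<'PYEOF'
    import base64, struct, zlib
    key = b"00112233445566778899aabbccddeeff"   # raw key material, as used above
    crc = struct.pack("<I", zlib.crc32(key))    # 4-byte CRC-32 trailer, little-endian assumed
    print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
    PYEOF
    # expected to reproduce the first key above:
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: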
00:19:17.877 [2024-12-06 13:13:06.389294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83363 ] 00:19:18.135 [2024-12-06 13:13:06.536576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.135 [2024-12-06 13:13:06.597839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.135 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.135 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.135 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aP8DBLve03 00:19:18.393 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.651 [2024-12-06 13:13:07.223547] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.910 TLSTESTn1 00:19:18.910 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:18.910 Running I/O for 10 seconds... 00:19:21.220 4076.00 IOPS, 15.92 MiB/s [2024-12-06T13:13:10.796Z] 4114.50 IOPS, 16.07 MiB/s [2024-12-06T13:13:11.742Z] 4122.67 IOPS, 16.10 MiB/s [2024-12-06T13:13:12.692Z] 4118.75 IOPS, 16.09 MiB/s [2024-12-06T13:13:13.625Z] 4130.80 IOPS, 16.14 MiB/s [2024-12-06T13:13:14.585Z] 4131.17 IOPS, 16.14 MiB/s [2024-12-06T13:13:15.516Z] 4136.86 IOPS, 16.16 MiB/s [2024-12-06T13:13:16.446Z] 4153.38 IOPS, 16.22 MiB/s [2024-12-06T13:13:17.821Z] 4163.11 IOPS, 16.26 MiB/s [2024-12-06T13:13:17.821Z] 4157.00 IOPS, 16.24 MiB/s 00:19:29.181 Latency(us) 00:19:29.181 [2024-12-06T13:13:17.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.181 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.181 Verification LBA range: start 0x0 length 0x2000 00:19:29.181 TLSTESTn1 : 10.02 4159.60 16.25 0.00 0.00 30702.46 9175.04 24307.90 00:19:29.181 [2024-12-06T13:13:17.821Z] =================================================================================================================== 00:19:29.181 [2024-12-06T13:13:17.821Z] Total : 4159.60 16.25 0.00 0.00 30702.46 9175.04 24307.90 00:19:29.181 { 00:19:29.181 "results": [ 00:19:29.181 { 00:19:29.181 "job": "TLSTESTn1", 00:19:29.181 "core_mask": "0x4", 00:19:29.181 "workload": "verify", 00:19:29.181 "status": "finished", 00:19:29.181 "verify_range": { 00:19:29.181 "start": 0, 00:19:29.181 "length": 8192 00:19:29.181 }, 00:19:29.181 "queue_depth": 128, 00:19:29.181 "io_size": 4096, 00:19:29.181 "runtime": 10.024043, 00:19:29.181 "iops": 4159.599075941714, 00:19:29.181 "mibps": 16.24843389039732, 00:19:29.181 "io_failed": 0, 00:19:29.181 "io_timeout": 0, 00:19:29.181 "avg_latency_us": 30702.463473104028, 00:19:29.181 "min_latency_us": 9175.04, 00:19:29.181 "max_latency_us": 24307.898181818182 00:19:29.181 } 00:19:29.181 ], 00:19:29.181 "core_count": 1 00:19:29.181 } 00:19:29.181 13:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.181 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83363 00:19:29.181 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83363 ']' 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83363 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83363 00:19:29.182 killing process with pid 83363 00:19:29.182 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.182 00:19:29.182 Latency(us) 00:19:29.182 [2024-12-06T13:13:17.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.182 [2024-12-06T13:13:17.822Z] =================================================================================================================== 00:19:29.182 [2024-12-06T13:13:17.822Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83363' 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83363 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83363 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IKKz2Pi7aC 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IKKz2Pi7aC 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IKKz2Pi7aC 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IKKz2Pi7aC 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83508 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83508 /var/tmp/bdevperf.sock 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83508 ']' 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.182 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.182 [2024-12-06 13:13:17.794621] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:19:29.182 [2024-12-06 13:13:17.794767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83508 ] 00:19:29.439 [2024-12-06 13:13:17.942445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.439 [2024-12-06 13:13:18.008366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.372 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.372 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:30.372 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IKKz2Pi7aC 00:19:30.630 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.889 [2024-12-06 13:13:19.355297] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.889 [2024-12-06 13:13:19.360784] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:30.889 [2024-12-06 13:13:19.361301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1558660 (107): Transport endpoint is not connected 00:19:30.889 [2024-12-06 13:13:19.362282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1558660 (9): Bad file descriptor 00:19:30.889 [2024-12-06 
13:13:19.363278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:30.889 [2024-12-06 13:13:19.363310] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:19:30.889 [2024-12-06 13:13:19.363322] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:30.889 [2024-12-06 13:13:19.363341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:30.889 2024/12/06 13:13:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:30.889 request: 00:19:30.889 { 00:19:30.889 "method": "bdev_nvme_attach_controller", 00:19:30.889 "params": { 00:19:30.889 "name": "TLSTEST", 00:19:30.889 "trtype": "tcp", 00:19:30.889 "traddr": "10.0.0.3", 00:19:30.889 "adrfam": "ipv4", 00:19:30.889 "trsvcid": "4420", 00:19:30.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.889 "prchk_reftag": false, 00:19:30.889 "prchk_guard": false, 00:19:30.889 "hdgst": false, 00:19:30.889 "ddgst": false, 00:19:30.889 "psk": "key0", 00:19:30.889 "allow_unrecognized_csi": false 00:19:30.889 } 00:19:30.889 } 00:19:30.889 Got JSON-RPC error response 00:19:30.889 GoRPCClient: error on JSON-RPC call 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83508 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83508 ']' 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83508 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83508 00:19:30.889 killing process with pid 83508 00:19:30.889 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.889 00:19:30.889 Latency(us) 00:19:30.889 [2024-12-06T13:13:19.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.889 [2024-12-06T13:13:19.529Z] =================================================================================================================== 00:19:30.889 [2024-12-06T13:13:19.529Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83508' 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83508 00:19:30.889 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 83508 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aP8DBLve03 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aP8DBLve03 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aP8DBLve03 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aP8DBLve03 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83566 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83566 /var/tmp/bdevperf.sock 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83566 ']' 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
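The wrong-key attempt above (pid 83508, fed /tmp/tmp.IKKz2Pi7aC instead of the key registered on the target) is the first of several cases that are supposed to fail: the TLS handshake never completes, spdk_sock_recv() reports errno 107, and bdev_nvme_attach_controller comes back with Code=-5 Input/output error, so run_bdevperf exits nonzero. The test wraps each such case in the NOT helper from autotest_common.sh, which inverts the exit status. A minimal sketch of that pattern; the real helper also validates its argument via valid_exec_arg and treats exit codes above 128 (signal deaths) specially, as the es checks in the log show:

    NOT() {
        # succeed only when the wrapped command fails
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IKKz2Pi7aC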
00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.163 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.163 [2024-12-06 13:13:19.693659] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:19:31.163 [2024-12-06 13:13:19.693761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83566 ] 00:19:31.452 [2024-12-06 13:13:19.841323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.452 [2024-12-06 13:13:19.905129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.386 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.386 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.386 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aP8DBLve03 00:19:32.643 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:32.902 [2024-12-06 13:13:21.302321] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.902 [2024-12-06 13:13:21.310807] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:32.902 [2024-12-06 13:13:21.310849] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:32.902 [2024-12-06 13:13:21.310900] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:32.902 [2024-12-06 13:13:21.311342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147f660 (107): Transport endpoint is not connected 00:19:32.902 [2024-12-06 13:13:21.312325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147f660 (9): Bad file descriptor 00:19:32.902 [2024-12-06 13:13:21.313321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:32.902 [2024-12-06 13:13:21.313358] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:19:32.902 [2024-12-06 13:13:21.313370] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:32.902 [2024-12-06 13:13:21.313387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:32.902 2024/12/06 13:13:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:32.902 request: 00:19:32.902 { 00:19:32.902 "method": "bdev_nvme_attach_controller", 00:19:32.902 "params": { 00:19:32.902 "name": "TLSTEST", 00:19:32.902 "trtype": "tcp", 00:19:32.902 "traddr": "10.0.0.3", 00:19:32.902 "adrfam": "ipv4", 00:19:32.902 "trsvcid": "4420", 00:19:32.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.902 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:32.902 "prchk_reftag": false, 00:19:32.902 "prchk_guard": false, 00:19:32.902 "hdgst": false, 00:19:32.902 "ddgst": false, 00:19:32.902 "psk": "key0", 00:19:32.902 "allow_unrecognized_csi": false 00:19:32.902 } 00:19:32.902 } 00:19:32.902 Got JSON-RPC error response 00:19:32.902 GoRPCClient: error on JSON-RPC call 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83566 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83566 ']' 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83566 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83566 00:19:32.902 killing process with pid 83566 00:19:32.902 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.902 00:19:32.902 Latency(us) 00:19:32.902 [2024-12-06T13:13:21.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.902 [2024-12-06T13:13:21.542Z] =================================================================================================================== 00:19:32.902 [2024-12-06T13:13:21.542Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83566' 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83566 00:19:32.902 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83566 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.160 13:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aP8DBLve03 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aP8DBLve03 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aP8DBLve03 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aP8DBLve03 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83619 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83619 /var/tmp/bdevperf.sock 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83619 ']' 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.160 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.160 [2024-12-06 13:13:21.618081] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
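The host2 failure just above is a different failure mode from the wrong-key case: the handshake dies on the server side because the target resolves PSKs by the TLS identity the initiator presents, and, going by the tcp.c and posix.c error text, that identity embeds both NQNs. With key0 registered only for host1 against cnode1 via nvmf_subsystem_add_host, connecting as host2 (or, in the case now starting, to cnode2) produces an identity with no matching key. A sketch of the lookup key, with the layout taken from the log's own error lines rather than from the spec:

    # identity the target tried and failed to resolve for the host2 attempt:
    hostnqn="nqn.2016-06.io.spdk:host2"
    subnqn="nqn.2016-06.io.spdk:cnode1"
    echo "NVMe0R01 ${hostnqn} ${subnqn}"
    # only this identity was registered with --psk key0 and would resolve:
    #   NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1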
00:19:33.160 [2024-12-06 13:13:21.618199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83619 ] 00:19:33.160 [2024-12-06 13:13:21.766760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.419 [2024-12-06 13:13:21.840594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.419 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.419 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:33.419 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aP8DBLve03 00:19:33.678 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.937 [2024-12-06 13:13:22.552017] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.937 [2024-12-06 13:13:22.561406] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:33.937 [2024-12-06 13:13:22.561508] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:33.937 [2024-12-06 13:13:22.561611] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:33.937 [2024-12-06 13:13:22.562262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10fe660 (107): Transport endpoint is not connected 00:19:33.937 [2024-12-06 13:13:22.563245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10fe660 (9): Bad file descriptor 00:19:33.937 [2024-12-06 13:13:22.564241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:33.937 [2024-12-06 13:13:22.564273] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:19:33.937 [2024-12-06 13:13:22.564286] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:33.937 [2024-12-06 13:13:22.564313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:33.937 2024/12/06 13:13:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:33.937 request: 00:19:33.937 { 00:19:33.937 "method": "bdev_nvme_attach_controller", 00:19:33.937 "params": { 00:19:33.937 "name": "TLSTEST", 00:19:33.937 "trtype": "tcp", 00:19:33.937 "traddr": "10.0.0.3", 00:19:33.937 "adrfam": "ipv4", 00:19:33.937 "trsvcid": "4420", 00:19:33.937 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:33.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.937 "prchk_reftag": false, 00:19:33.937 "prchk_guard": false, 00:19:33.937 "hdgst": false, 00:19:33.937 "ddgst": false, 00:19:33.937 "psk": "key0", 00:19:33.937 "allow_unrecognized_csi": false 00:19:33.937 } 00:19:33.937 } 00:19:33.937 Got JSON-RPC error response 00:19:33.937 GoRPCClient: error on JSON-RPC call 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83619 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83619 ']' 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83619 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83619 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:34.196 killing process with pid 83619 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83619' 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83619 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83619 00:19:34.196 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.196 00:19:34.196 Latency(us) 00:19:34.196 [2024-12-06T13:13:22.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.196 [2024-12-06T13:13:22.836Z] =================================================================================================================== 00:19:34.196 [2024-12-06T13:13:22.836Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.196 13:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83658 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83658 /var/tmp/bdevperf.sock 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83658 ']' 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.196 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.455 [2024-12-06 13:13:22.874712] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
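The `NOT run_bdevperf ...` call traced above (and the `es=1 ... (( !es == 0 ))` lines just before it) is the autotest harness inverting an exit status: the wrapped command is expected to fail, so the `return 1` from tls.sh@38 makes the test pass. A rough Python analog of that pattern, purely illustrative and not part of SPDK:

```python
import subprocess

def expect_failure(*cmd: str) -> None:
    """Succeed only if the wrapped command exits non-zero, like NOT() does."""
    rc = subprocess.run(cmd, check=False).returncode
    if rc == 0:
        raise AssertionError(f"{' '.join(cmd)} unexpectedly succeeded")
```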
00:19:34.455 [2024-12-06 13:13:22.874815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83658 ] 00:19:34.455 [2024-12-06 13:13:23.016958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.455 [2024-12-06 13:13:23.080097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.760 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.760 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:34.760 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:35.034 [2024-12-06 13:13:23.476449] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:35.034 [2024-12-06 13:13:23.476523] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:35.034 2024/12/06 13:13:23 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:19:35.034 request: 00:19:35.034 { 00:19:35.034 "method": "keyring_file_add_key", 00:19:35.034 "params": { 00:19:35.034 "name": "key0", 00:19:35.034 "path": "" 00:19:35.034 } 00:19:35.034 } 00:19:35.034 Got JSON-RPC error response 00:19:35.034 GoRPCClient: error on JSON-RPC call 00:19:35.034 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:35.294 [2024-12-06 13:13:23.764672] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.294 [2024-12-06 13:13:23.764749] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:35.294 2024/12/06 13:13:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:19:35.294 request: 00:19:35.294 { 00:19:35.294 "method": "bdev_nvme_attach_controller", 00:19:35.294 "params": { 00:19:35.294 "name": "TLSTEST", 00:19:35.294 "trtype": "tcp", 00:19:35.294 "traddr": "10.0.0.3", 00:19:35.294 "adrfam": "ipv4", 00:19:35.294 "trsvcid": "4420", 00:19:35.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.294 "prchk_reftag": false, 00:19:35.294 "prchk_guard": false, 00:19:35.294 "hdgst": false, 00:19:35.294 "ddgst": false, 00:19:35.294 "psk": "key0", 00:19:35.294 "allow_unrecognized_csi": false 00:19:35.294 } 00:19:35.294 } 00:19:35.294 Got JSON-RPC error response 00:19:35.294 GoRPCClient: error on JSON-RPC call 00:19:35.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83658 00:19:35.294 13:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83658 ']' 00:19:35.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83658 00:19:35.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:35.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83658 00:19:35.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:35.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:35.294 killing process with pid 83658 00:19:35.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83658' 00:19:35.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83658 00:19:35.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83658 00:19:35.294 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.294 00:19:35.294 Latency(us) 00:19:35.294 [2024-12-06T13:13:23.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.294 [2024-12-06T13:13:23.934Z] =================================================================================================================== 00:19:35.294 [2024-12-06T13:13:23.934Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:35.553 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:35.553 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:35.553 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:35.553 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:35.553 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:35.553 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82991 00:19:35.554 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82991 ']' 00:19:35.554 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82991 00:19:35.554 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:35.554 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.554 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82991 00:19:35.554 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:35.554 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:35.554 killing process with pid 82991 00:19:35.554 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82991' 00:19:35.554 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82991 00:19:35.554 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82991 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Tz0RFqROjd 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Tz0RFqROjd 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83712 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83712 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83712 ']' 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.813 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.813 [2024-12-06 13:13:24.383384] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
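The `format_interchange_psk` trace above converts a raw hex key plus a digest id into the interchange form `NVMeTLSkey-1:02:<base64>:` stored in `key_long`. Judging from the heredoc inputs and the printed result, the embedded Python base64-encodes the key string with a CRC32 appended; a sketch under that assumption (CRC byte order assumed little-endian):

```python
import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    """Reproduce the key_long value printed above (logic inferred from the trace)."""
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")  # integrity tag appended to the key
    b64 = base64.b64encode(raw + crc).decode()
    return f"NVMeTLSkey-1:{hash_id:02x}:{b64}:"

print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))
```

With digest 2 this yields the `NVMeTLSkey-1:02:MDAx...` string seen in the trace; the trailing 8 base64 characters encode the CRC.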
00:19:35.813 [2024-12-06 13:13:24.383537] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.072 [2024-12-06 13:13:24.534265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.072 [2024-12-06 13:13:24.592209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.072 [2024-12-06 13:13:24.592270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.072 [2024-12-06 13:13:24.592282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.072 [2024-12-06 13:13:24.592290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.072 [2024-12-06 13:13:24.592298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.072 [2024-12-06 13:13:24.592726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.331 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.331 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.331 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.331 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:36.331 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.331 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.331 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Tz0RFqROjd 00:19:36.331 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Tz0RFqROjd 00:19:36.331 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.593 [2024-12-06 13:13:25.066196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.593 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:36.851 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:37.110 [2024-12-06 13:13:25.634371] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.111 [2024-12-06 13:13:25.634654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:37.111 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:37.370 malloc0 00:19:37.370 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:37.629 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd 00:19:37.888 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tz0RFqROjd 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Tz0RFqROjd 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83814 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83814 /var/tmp/bdevperf.sock 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83814 ']' 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.462 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.462 [2024-12-06 13:13:26.865798] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
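Every `rpc.py` call in these traces is a JSON-RPC 2.0 request over a Unix domain socket (`/var/tmp/spdk.sock` for the target, `/var/tmp/bdevperf.sock` for bdevperf). A minimal standalone client for the two calls this happy path depends on; method and parameter names are copied from the request dumps earlier in the log, while the framing loop is a simplification of what rpc.py does:

```python
import json
import socket

def rpc(sock_path: str, method: str, params: dict):
    """Send one JSON-RPC 2.0 request over an SPDK Unix socket, return the reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(
            {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        ).encode())
        buf = b""
        while chunk := s.recv(4096):          # keep reading until the JSON parses
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue
        raise ConnectionError("socket closed before a full reply arrived")

rpc("/var/tmp/bdevperf.sock", "keyring_file_add_key",
    {"name": "key0", "path": "/tmp/tmp.Tz0RFqROjd"})
rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller",
    {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
     "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
     "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0"})
```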
00:19:38.462 [2024-12-06 13:13:26.866372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83814 ] 00:19:38.462 [2024-12-06 13:13:27.014046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.462 [2024-12-06 13:13:27.084220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.394 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.394 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:39.394 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd 00:19:39.652 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:39.916 [2024-12-06 13:13:28.448959] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.916 TLSTESTn1 00:19:39.916 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.174 Running I/O for 10 seconds... 00:19:42.109 3968.00 IOPS, 15.50 MiB/s [2024-12-06T13:13:31.683Z] 3968.00 IOPS, 15.50 MiB/s [2024-12-06T13:13:33.058Z] 3993.00 IOPS, 15.60 MiB/s [2024-12-06T13:13:33.994Z] 4005.50 IOPS, 15.65 MiB/s [2024-12-06T13:13:34.930Z] 4021.40 IOPS, 15.71 MiB/s [2024-12-06T13:13:35.865Z] 4028.33 IOPS, 15.74 MiB/s [2024-12-06T13:13:36.800Z] 4036.00 IOPS, 15.77 MiB/s [2024-12-06T13:13:37.736Z] 4038.00 IOPS, 15.77 MiB/s [2024-12-06T13:13:39.129Z] 4039.44 IOPS, 15.78 MiB/s [2024-12-06T13:13:39.129Z] 4040.40 IOPS, 15.78 MiB/s 00:19:50.489 Latency(us) 00:19:50.489 [2024-12-06T13:13:39.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.489 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.489 Verification LBA range: start 0x0 length 0x2000 00:19:50.489 TLSTESTn1 : 10.02 4046.25 15.81 0.00 0.00 31575.03 5957.82 23712.12 00:19:50.489 [2024-12-06T13:13:39.129Z] =================================================================================================================== 00:19:50.489 [2024-12-06T13:13:39.129Z] Total : 4046.25 15.81 0.00 0.00 31575.03 5957.82 23712.12 00:19:50.489 { 00:19:50.489 "results": [ 00:19:50.489 { 00:19:50.489 "job": "TLSTESTn1", 00:19:50.489 "core_mask": "0x4", 00:19:50.489 "workload": "verify", 00:19:50.489 "status": "finished", 00:19:50.489 "verify_range": { 00:19:50.489 "start": 0, 00:19:50.489 "length": 8192 00:19:50.489 }, 00:19:50.489 "queue_depth": 128, 00:19:50.489 "io_size": 4096, 00:19:50.489 "runtime": 10.016688, 00:19:50.489 "iops": 4046.247621968459, 00:19:50.489 "mibps": 15.805654773314293, 00:19:50.489 "io_failed": 0, 00:19:50.489 "io_timeout": 0, 00:19:50.489 "avg_latency_us": 31575.029227822266, 00:19:50.489 "min_latency_us": 5957.818181818182, 00:19:50.489 "max_latency_us": 23712.116363636364 00:19:50.489 } 00:19:50.489 ], 00:19:50.489 "core_count": 1 00:19:50.489 } 00:19:50.489 13:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83814 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83814 ']' 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83814 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83814 00:19:50.489 killing process with pid 83814 00:19:50.489 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.489 00:19:50.489 Latency(us) 00:19:50.489 [2024-12-06T13:13:39.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.489 [2024-12-06T13:13:39.129Z] =================================================================================================================== 00:19:50.489 [2024-12-06T13:13:39.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83814' 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83814 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83814 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Tz0RFqROjd 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tz0RFqROjd 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tz0RFqROjd 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tz0RFqROjd 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.Tz0RFqROjd 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83977 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.489 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.490 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83977 /var/tmp/bdevperf.sock 00:19:50.490 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83977 ']' 00:19:50.490 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.490 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.490 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.490 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.490 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.490 [2024-12-06 13:13:39.023190] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:19:50.490 [2024-12-06 13:13:39.023323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83977 ] 00:19:50.748 [2024-12-06 13:13:39.165096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.748 [2024-12-06 13:13:39.229211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.684 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.684 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:51.684 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd 00:19:51.943 [2024-12-06 13:13:40.345596] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Tz0RFqROjd': 0100666 00:19:51.943 [2024-12-06 13:13:40.345646] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:51.943 2024/12/06 13:13:40 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.Tz0RFqROjd], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:19:51.943 request: 00:19:51.943 { 00:19:51.943 "method": "keyring_file_add_key", 00:19:51.943 "params": { 00:19:51.943 "name": "key0", 00:19:51.943 "path": "/tmp/tmp.Tz0RFqROjd" 00:19:51.943 } 00:19:51.943 } 00:19:51.943 Got JSON-RPC error response 00:19:51.943 GoRPCClient: error on JSON-RPC call 00:19:51.943 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:52.202 [2024-12-06 13:13:40.637764] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.202 [2024-12-06 13:13:40.637843] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:52.202 2024/12/06 13:13:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:19:52.202 request: 00:19:52.202 { 00:19:52.202 "method": "bdev_nvme_attach_controller", 00:19:52.202 "params": { 00:19:52.202 "name": "TLSTEST", 00:19:52.202 "trtype": "tcp", 00:19:52.202 "traddr": "10.0.0.3", 00:19:52.202 "adrfam": "ipv4", 00:19:52.202 "trsvcid": "4420", 00:19:52.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.202 "prchk_reftag": false, 00:19:52.202 "prchk_guard": false, 00:19:52.202 "hdgst": false, 00:19:52.202 "ddgst": false, 00:19:52.202 "psk": "key0", 00:19:52.202 "allow_unrecognized_csi": false 00:19:52.202 } 00:19:52.202 } 00:19:52.202 Got JSON-RPC error response 00:19:52.202 GoRPCClient: error on JSON-RPC call 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83977 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83977 ']' 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83977 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83977 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:52.202 killing process with pid 83977 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83977' 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83977 00:19:52.202 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83977 00:19:52.202 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.202 00:19:52.202 Latency(us) 00:19:52.202 [2024-12-06T13:13:40.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.202 [2024-12-06T13:13:40.842Z] =================================================================================================================== 00:19:52.202 [2024-12-06T13:13:40.843Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
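This failure is also intentional: tls.sh@171 loosened the key file to 0666, and `keyring_file_add_key` rejects any key readable by group or others (the log prints the offending mode, 0100666; the same file is accepted again once it is chmod'd back to 0600). A pre-flight check of the kind keyring.c performs, with the exact mask inferred from the 0666-fails/0600-passes behaviour shown here:

```python
import os
import stat
import tempfile

def check_psk_file(path: str) -> None:
    """Reject PSK files accessible to group/others (mirrors the keyring's check)."""
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"Invalid permissions for key file '{path}': 0{mode:o}")

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o600)
check_psk_file(path)   # accepted, as after tls.sh@163/@182
os.chmod(path, 0o666)
check_psk_file(path)   # raises PermissionError, matching the error above
```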
00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83712 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83712 ']' 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83712 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83712 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:52.462 killing process with pid 83712 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83712' 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83712 00:19:52.462 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83712 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84040 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84040 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84040 ']' 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.721 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.721 [2024-12-06 13:13:41.202240] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:19:52.721 [2024-12-06 13:13:41.202382] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.721 [2024-12-06 13:13:41.352943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.979 [2024-12-06 13:13:41.413098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.979 [2024-12-06 13:13:41.413165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.979 [2024-12-06 13:13:41.413176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.979 [2024-12-06 13:13:41.413184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.979 [2024-12-06 13:13:41.413192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.979 [2024-12-06 13:13:41.413606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.914 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.914 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.914 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.914 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.914 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.914 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.914 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Tz0RFqROjd 00:19:53.914 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:53.914 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Tz0RFqROjd 00:19:53.915 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:53.915 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.915 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:53.915 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.915 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Tz0RFqROjd 00:19:53.915 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Tz0RFqROjd 00:19:53.915 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:54.173 [2024-12-06 13:13:42.553601] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.173 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:54.432 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:54.691 [2024-12-06 13:13:43.165783] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.691 [2024-12-06 13:13:43.166029] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:54.691 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.950 malloc0 00:19:54.950 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:55.209 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd 00:19:55.468 [2024-12-06 13:13:43.973638] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Tz0RFqROjd': 0100666 00:19:55.468 [2024-12-06 13:13:43.973680] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:55.468 2024/12/06 13:13:43 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.Tz0RFqROjd], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:19:55.468 request: 00:19:55.468 { 00:19:55.468 "method": "keyring_file_add_key", 00:19:55.468 "params": { 00:19:55.468 "name": "key0", 00:19:55.468 "path": "/tmp/tmp.Tz0RFqROjd" 00:19:55.468 } 00:19:55.468 } 00:19:55.468 Got JSON-RPC error response 00:19:55.468 GoRPCClient: error on JSON-RPC call 00:19:55.468 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:55.727 [2024-12-06 13:13:44.241740] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:55.727 [2024-12-06 13:13:44.241817] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:55.727 2024/12/06 13:13:44 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:19:55.727 request: 00:19:55.727 { 00:19:55.727 "method": "nvmf_subsystem_add_host", 00:19:55.727 "params": { 00:19:55.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.727 "host": "nqn.2016-06.io.spdk:host1", 00:19:55.727 "psk": "key0" 00:19:55.727 } 00:19:55.727 } 00:19:55.727 Got JSON-RPC error response 00:19:55.727 GoRPCClient: error on JSON-RPC call 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 84040 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84040 ']' 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 84040 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84040 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:55.727 killing process with pid 84040 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84040' 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84040 00:19:55.727 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84040 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Tz0RFqROjd 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84163 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84163 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84163 ']' 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.985 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.985 [2024-12-06 13:13:44.573178] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:19:55.985 [2024-12-06 13:13:44.573307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.244 [2024-12-06 13:13:44.723824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.244 [2024-12-06 13:13:44.790905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
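With the key mode restored to 0600, the target setup below succeeds end to end, and tls.sh@198 snapshots both the target and bdevperf state with `save_config`; the `tgtconf` JSON that follows is exactly that dump. One practical use of such a dump is recovering which keys a running app has registered; a small sketch, assuming it runs from an SPDK checkout next to a live target socket:

```python
import json
import subprocess

# Capture the running configuration, as tls.sh@198 does below.
cfg = json.loads(subprocess.run(
    ["scripts/rpc.py", "save_config"], capture_output=True, text=True).stdout)

# Walk the dump's subsystems/config/method layout to list registered key files.
for subsystem in cfg["subsystems"]:
    if subsystem["subsystem"] == "keyring":
        for entry in subsystem["config"]:
            if entry["method"] == "keyring_file_add_key":
                print(entry["params"]["name"], "->", entry["params"]["path"])
```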
00:19:56.244 [2024-12-06 13:13:44.790985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.244 [2024-12-06 13:13:44.791001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.244 [2024-12-06 13:13:44.791012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.244 [2024-12-06 13:13:44.791021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.244 [2024-12-06 13:13:44.791529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.502 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.502 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:56.502 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:56.502 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:56.502 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.502 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.502 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Tz0RFqROjd 00:19:56.502 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Tz0RFqROjd 00:19:56.502 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:56.758 [2024-12-06 13:13:45.248520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.758 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:57.015 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:57.272 [2024-12-06 13:13:45.792665] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.272 [2024-12-06 13:13:45.792931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:57.272 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.543 malloc0 00:19:57.543 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:57.818 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd 00:19:58.075 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:58.640 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84265 00:19:58.640 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.640 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.640 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84265 /var/tmp/bdevperf.sock 00:19:58.640 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84265 ']' 00:19:58.640 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.640 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.640 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.640 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.640 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.640 [2024-12-06 13:13:47.019891] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:19:58.640 [2024-12-06 13:13:47.019993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84265 ] 00:19:58.640 [2024-12-06 13:13:47.167224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.640 [2024-12-06 13:13:47.233944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.898 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.898 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.898 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd 00:19:59.155 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.413 [2024-12-06 13:13:47.891326] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.413 TLSTESTn1 00:19:59.413 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:59.978 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:59.978 "subsystems": [ 00:19:59.978 { 00:19:59.978 "subsystem": "keyring", 00:19:59.978 "config": [ 00:19:59.978 { 00:19:59.978 "method": "keyring_file_add_key", 00:19:59.978 "params": { 00:19:59.978 "name": "key0", 00:19:59.978 "path": "/tmp/tmp.Tz0RFqROjd" 00:19:59.978 } 00:19:59.978 } 00:19:59.978 ] 00:19:59.978 }, 00:19:59.978 { 00:19:59.978 "subsystem": "iobuf", 00:19:59.978 "config": [ 00:19:59.978 { 00:19:59.978 "method": "iobuf_set_options", 00:19:59.978 "params": { 00:19:59.978 "enable_numa": false, 00:19:59.978 "large_bufsize": 135168, 00:19:59.978 "large_pool_count": 1024, 00:19:59.978 
"small_bufsize": 8192, 00:19:59.978 "small_pool_count": 8192 00:19:59.978 } 00:19:59.978 } 00:19:59.978 ] 00:19:59.978 }, 00:19:59.978 { 00:19:59.979 "subsystem": "sock", 00:19:59.979 "config": [ 00:19:59.979 { 00:19:59.979 "method": "sock_set_default_impl", 00:19:59.979 "params": { 00:19:59.979 "impl_name": "posix" 00:19:59.979 } 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "method": "sock_impl_set_options", 00:19:59.979 "params": { 00:19:59.979 "enable_ktls": false, 00:19:59.979 "enable_placement_id": 0, 00:19:59.979 "enable_quickack": false, 00:19:59.979 "enable_recv_pipe": true, 00:19:59.979 "enable_zerocopy_send_client": false, 00:19:59.979 "enable_zerocopy_send_server": true, 00:19:59.979 "impl_name": "ssl", 00:19:59.979 "recv_buf_size": 4096, 00:19:59.979 "send_buf_size": 4096, 00:19:59.979 "tls_version": 0, 00:19:59.979 "zerocopy_threshold": 0 00:19:59.979 } 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "method": "sock_impl_set_options", 00:19:59.979 "params": { 00:19:59.979 "enable_ktls": false, 00:19:59.979 "enable_placement_id": 0, 00:19:59.979 "enable_quickack": false, 00:19:59.979 "enable_recv_pipe": true, 00:19:59.979 "enable_zerocopy_send_client": false, 00:19:59.979 "enable_zerocopy_send_server": true, 00:19:59.979 "impl_name": "posix", 00:19:59.979 "recv_buf_size": 2097152, 00:19:59.979 "send_buf_size": 2097152, 00:19:59.979 "tls_version": 0, 00:19:59.979 "zerocopy_threshold": 0 00:19:59.979 } 00:19:59.979 } 00:19:59.979 ] 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "subsystem": "vmd", 00:19:59.979 "config": [] 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "subsystem": "accel", 00:19:59.979 "config": [ 00:19:59.979 { 00:19:59.979 "method": "accel_set_options", 00:19:59.979 "params": { 00:19:59.979 "buf_count": 2048, 00:19:59.979 "large_cache_size": 16, 00:19:59.979 "sequence_count": 2048, 00:19:59.979 "small_cache_size": 128, 00:19:59.979 "task_count": 2048 00:19:59.979 } 00:19:59.979 } 00:19:59.979 ] 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "subsystem": "bdev", 00:19:59.979 "config": [ 00:19:59.979 { 00:19:59.979 "method": "bdev_set_options", 00:19:59.979 "params": { 00:19:59.979 "bdev_auto_examine": true, 00:19:59.979 "bdev_io_cache_size": 256, 00:19:59.979 "bdev_io_pool_size": 65535, 00:19:59.979 "iobuf_large_cache_size": 16, 00:19:59.979 "iobuf_small_cache_size": 128 00:19:59.979 } 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "method": "bdev_raid_set_options", 00:19:59.979 "params": { 00:19:59.979 "process_max_bandwidth_mb_sec": 0, 00:19:59.979 "process_window_size_kb": 1024 00:19:59.979 } 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "method": "bdev_iscsi_set_options", 00:19:59.979 "params": { 00:19:59.979 "timeout_sec": 30 00:19:59.979 } 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "method": "bdev_nvme_set_options", 00:19:59.979 "params": { 00:19:59.979 "action_on_timeout": "none", 00:19:59.979 "allow_accel_sequence": false, 00:19:59.979 "arbitration_burst": 0, 00:19:59.979 "bdev_retry_count": 3, 00:19:59.979 "ctrlr_loss_timeout_sec": 0, 00:19:59.979 "delay_cmd_submit": true, 00:19:59.979 "dhchap_dhgroups": [ 00:19:59.979 "null", 00:19:59.979 "ffdhe2048", 00:19:59.979 "ffdhe3072", 00:19:59.979 "ffdhe4096", 00:19:59.979 "ffdhe6144", 00:19:59.979 "ffdhe8192" 00:19:59.979 ], 00:19:59.979 "dhchap_digests": [ 00:19:59.979 "sha256", 00:19:59.979 "sha384", 00:19:59.979 "sha512" 00:19:59.979 ], 00:19:59.979 "disable_auto_failback": false, 00:19:59.979 "fast_io_fail_timeout_sec": 0, 00:19:59.979 "generate_uuids": false, 00:19:59.979 "high_priority_weight": 0, 00:19:59.979 
"io_path_stat": false, 00:19:59.979 "io_queue_requests": 0, 00:19:59.979 "keep_alive_timeout_ms": 10000, 00:19:59.979 "low_priority_weight": 0, 00:19:59.979 "medium_priority_weight": 0, 00:19:59.979 "nvme_adminq_poll_period_us": 10000, 00:19:59.979 "nvme_error_stat": false, 00:19:59.979 "nvme_ioq_poll_period_us": 0, 00:19:59.979 "rdma_cm_event_timeout_ms": 0, 00:19:59.979 "rdma_max_cq_size": 0, 00:19:59.979 "rdma_srq_size": 0, 00:19:59.979 "reconnect_delay_sec": 0, 00:19:59.979 "timeout_admin_us": 0, 00:19:59.979 "timeout_us": 0, 00:19:59.979 "transport_ack_timeout": 0, 00:19:59.979 "transport_retry_count": 4, 00:19:59.979 "transport_tos": 0 00:19:59.979 } 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "method": "bdev_nvme_set_hotplug", 00:19:59.979 "params": { 00:19:59.979 "enable": false, 00:19:59.979 "period_us": 100000 00:19:59.979 } 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "method": "bdev_malloc_create", 00:19:59.979 "params": { 00:19:59.979 "block_size": 4096, 00:19:59.979 "dif_is_head_of_md": false, 00:19:59.979 "dif_pi_format": 0, 00:19:59.979 "dif_type": 0, 00:19:59.979 "md_size": 0, 00:19:59.979 "name": "malloc0", 00:19:59.979 "num_blocks": 8192, 00:19:59.979 "optimal_io_boundary": 0, 00:19:59.979 "physical_block_size": 4096, 00:19:59.979 "uuid": "fc9fa1cb-3118-4b27-b766-6e4503733164" 00:19:59.979 } 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "method": "bdev_wait_for_examine" 00:19:59.979 } 00:19:59.979 ] 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "subsystem": "nbd", 00:19:59.979 "config": [] 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "subsystem": "scheduler", 00:19:59.979 "config": [ 00:19:59.979 { 00:19:59.979 "method": "framework_set_scheduler", 00:19:59.979 "params": { 00:19:59.979 "name": "static" 00:19:59.979 } 00:19:59.979 } 00:19:59.979 ] 00:19:59.979 }, 00:19:59.979 { 00:19:59.979 "subsystem": "nvmf", 00:19:59.979 "config": [ 00:19:59.979 { 00:19:59.979 "method": "nvmf_set_config", 00:19:59.979 "params": { 00:19:59.979 "admin_cmd_passthru": { 00:19:59.979 "identify_ctrlr": false 00:19:59.979 }, 00:19:59.979 "dhchap_dhgroups": [ 00:19:59.979 "null", 00:19:59.979 "ffdhe2048", 00:19:59.979 "ffdhe3072", 00:19:59.979 "ffdhe4096", 00:19:59.980 "ffdhe6144", 00:19:59.980 "ffdhe8192" 00:19:59.980 ], 00:19:59.980 "dhchap_digests": [ 00:19:59.980 "sha256", 00:19:59.980 "sha384", 00:19:59.980 "sha512" 00:19:59.980 ], 00:19:59.980 "discovery_filter": "match_any" 00:19:59.980 } 00:19:59.980 }, 00:19:59.980 { 00:19:59.980 "method": "nvmf_set_max_subsystems", 00:19:59.980 "params": { 00:19:59.980 "max_subsystems": 1024 00:19:59.980 } 00:19:59.980 }, 00:19:59.980 { 00:19:59.980 "method": "nvmf_set_crdt", 00:19:59.980 "params": { 00:19:59.980 "crdt1": 0, 00:19:59.980 "crdt2": 0, 00:19:59.980 "crdt3": 0 00:19:59.980 } 00:19:59.980 }, 00:19:59.980 { 00:19:59.980 "method": "nvmf_create_transport", 00:19:59.980 "params": { 00:19:59.980 "abort_timeout_sec": 1, 00:19:59.980 "ack_timeout": 0, 00:19:59.980 "buf_cache_size": 4294967295, 00:19:59.980 "c2h_success": false, 00:19:59.980 "data_wr_pool_size": 0, 00:19:59.980 "dif_insert_or_strip": false, 00:19:59.980 "in_capsule_data_size": 4096, 00:19:59.980 "io_unit_size": 131072, 00:19:59.980 "max_aq_depth": 128, 00:19:59.980 "max_io_qpairs_per_ctrlr": 127, 00:19:59.980 "max_io_size": 131072, 00:19:59.980 "max_queue_depth": 128, 00:19:59.980 "num_shared_buffers": 511, 00:19:59.980 "sock_priority": 0, 00:19:59.980 "trtype": "TCP", 00:19:59.980 "zcopy": false 00:19:59.980 } 00:19:59.980 }, 00:19:59.980 { 00:19:59.980 "method": 
"nvmf_create_subsystem", 00:19:59.980 "params": { 00:19:59.980 "allow_any_host": false, 00:19:59.980 "ana_reporting": false, 00:19:59.980 "max_cntlid": 65519, 00:19:59.980 "max_namespaces": 10, 00:19:59.980 "min_cntlid": 1, 00:19:59.980 "model_number": "SPDK bdev Controller", 00:19:59.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.980 "serial_number": "SPDK00000000000001" 00:19:59.980 } 00:19:59.980 }, 00:19:59.980 { 00:19:59.980 "method": "nvmf_subsystem_add_host", 00:19:59.980 "params": { 00:19:59.980 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.980 "psk": "key0" 00:19:59.980 } 00:19:59.980 }, 00:19:59.980 { 00:19:59.980 "method": "nvmf_subsystem_add_ns", 00:19:59.980 "params": { 00:19:59.980 "namespace": { 00:19:59.980 "bdev_name": "malloc0", 00:19:59.980 "nguid": "FC9FA1CB31184B27B7666E4503733164", 00:19:59.980 "no_auto_visible": false, 00:19:59.980 "nsid": 1, 00:19:59.980 "uuid": "fc9fa1cb-3118-4b27-b766-6e4503733164" 00:19:59.980 }, 00:19:59.980 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:59.980 } 00:19:59.980 }, 00:19:59.980 { 00:19:59.980 "method": "nvmf_subsystem_add_listener", 00:19:59.980 "params": { 00:19:59.980 "listen_address": { 00:19:59.980 "adrfam": "IPv4", 00:19:59.980 "traddr": "10.0.0.3", 00:19:59.980 "trsvcid": "4420", 00:19:59.980 "trtype": "TCP" 00:19:59.980 }, 00:19:59.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.980 "secure_channel": true 00:19:59.980 } 00:19:59.980 } 00:19:59.980 ] 00:19:59.980 } 00:19:59.980 ] 00:19:59.980 }' 00:19:59.980 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:00.239 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:00.239 "subsystems": [ 00:20:00.239 { 00:20:00.239 "subsystem": "keyring", 00:20:00.239 "config": [ 00:20:00.239 { 00:20:00.239 "method": "keyring_file_add_key", 00:20:00.239 "params": { 00:20:00.239 "name": "key0", 00:20:00.239 "path": "/tmp/tmp.Tz0RFqROjd" 00:20:00.239 } 00:20:00.239 } 00:20:00.239 ] 00:20:00.239 }, 00:20:00.239 { 00:20:00.239 "subsystem": "iobuf", 00:20:00.239 "config": [ 00:20:00.239 { 00:20:00.239 "method": "iobuf_set_options", 00:20:00.239 "params": { 00:20:00.239 "enable_numa": false, 00:20:00.239 "large_bufsize": 135168, 00:20:00.239 "large_pool_count": 1024, 00:20:00.239 "small_bufsize": 8192, 00:20:00.240 "small_pool_count": 8192 00:20:00.240 } 00:20:00.240 } 00:20:00.240 ] 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "subsystem": "sock", 00:20:00.240 "config": [ 00:20:00.240 { 00:20:00.240 "method": "sock_set_default_impl", 00:20:00.240 "params": { 00:20:00.240 "impl_name": "posix" 00:20:00.240 } 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "method": "sock_impl_set_options", 00:20:00.240 "params": { 00:20:00.240 "enable_ktls": false, 00:20:00.240 "enable_placement_id": 0, 00:20:00.240 "enable_quickack": false, 00:20:00.240 "enable_recv_pipe": true, 00:20:00.240 "enable_zerocopy_send_client": false, 00:20:00.240 "enable_zerocopy_send_server": true, 00:20:00.240 "impl_name": "ssl", 00:20:00.240 "recv_buf_size": 4096, 00:20:00.240 "send_buf_size": 4096, 00:20:00.240 "tls_version": 0, 00:20:00.240 "zerocopy_threshold": 0 00:20:00.240 } 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "method": "sock_impl_set_options", 00:20:00.240 "params": { 00:20:00.240 "enable_ktls": false, 00:20:00.240 "enable_placement_id": 0, 00:20:00.240 "enable_quickack": false, 00:20:00.240 "enable_recv_pipe": true, 
00:20:00.240 "enable_zerocopy_send_client": false, 00:20:00.240 "enable_zerocopy_send_server": true, 00:20:00.240 "impl_name": "posix", 00:20:00.240 "recv_buf_size": 2097152, 00:20:00.240 "send_buf_size": 2097152, 00:20:00.240 "tls_version": 0, 00:20:00.240 "zerocopy_threshold": 0 00:20:00.240 } 00:20:00.240 } 00:20:00.240 ] 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "subsystem": "vmd", 00:20:00.240 "config": [] 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "subsystem": "accel", 00:20:00.240 "config": [ 00:20:00.240 { 00:20:00.240 "method": "accel_set_options", 00:20:00.240 "params": { 00:20:00.240 "buf_count": 2048, 00:20:00.240 "large_cache_size": 16, 00:20:00.240 "sequence_count": 2048, 00:20:00.240 "small_cache_size": 128, 00:20:00.240 "task_count": 2048 00:20:00.240 } 00:20:00.240 } 00:20:00.240 ] 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "subsystem": "bdev", 00:20:00.240 "config": [ 00:20:00.240 { 00:20:00.240 "method": "bdev_set_options", 00:20:00.240 "params": { 00:20:00.240 "bdev_auto_examine": true, 00:20:00.240 "bdev_io_cache_size": 256, 00:20:00.240 "bdev_io_pool_size": 65535, 00:20:00.240 "iobuf_large_cache_size": 16, 00:20:00.240 "iobuf_small_cache_size": 128 00:20:00.240 } 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "method": "bdev_raid_set_options", 00:20:00.240 "params": { 00:20:00.240 "process_max_bandwidth_mb_sec": 0, 00:20:00.240 "process_window_size_kb": 1024 00:20:00.240 } 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "method": "bdev_iscsi_set_options", 00:20:00.240 "params": { 00:20:00.240 "timeout_sec": 30 00:20:00.240 } 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "method": "bdev_nvme_set_options", 00:20:00.240 "params": { 00:20:00.240 "action_on_timeout": "none", 00:20:00.240 "allow_accel_sequence": false, 00:20:00.240 "arbitration_burst": 0, 00:20:00.240 "bdev_retry_count": 3, 00:20:00.240 "ctrlr_loss_timeout_sec": 0, 00:20:00.240 "delay_cmd_submit": true, 00:20:00.240 "dhchap_dhgroups": [ 00:20:00.240 "null", 00:20:00.240 "ffdhe2048", 00:20:00.240 "ffdhe3072", 00:20:00.240 "ffdhe4096", 00:20:00.240 "ffdhe6144", 00:20:00.240 "ffdhe8192" 00:20:00.240 ], 00:20:00.240 "dhchap_digests": [ 00:20:00.240 "sha256", 00:20:00.240 "sha384", 00:20:00.240 "sha512" 00:20:00.240 ], 00:20:00.240 "disable_auto_failback": false, 00:20:00.240 "fast_io_fail_timeout_sec": 0, 00:20:00.240 "generate_uuids": false, 00:20:00.240 "high_priority_weight": 0, 00:20:00.240 "io_path_stat": false, 00:20:00.240 "io_queue_requests": 512, 00:20:00.240 "keep_alive_timeout_ms": 10000, 00:20:00.240 "low_priority_weight": 0, 00:20:00.240 "medium_priority_weight": 0, 00:20:00.240 "nvme_adminq_poll_period_us": 10000, 00:20:00.240 "nvme_error_stat": false, 00:20:00.240 "nvme_ioq_poll_period_us": 0, 00:20:00.240 "rdma_cm_event_timeout_ms": 0, 00:20:00.240 "rdma_max_cq_size": 0, 00:20:00.240 "rdma_srq_size": 0, 00:20:00.240 "reconnect_delay_sec": 0, 00:20:00.240 "timeout_admin_us": 0, 00:20:00.240 "timeout_us": 0, 00:20:00.240 "transport_ack_timeout": 0, 00:20:00.240 "transport_retry_count": 4, 00:20:00.240 "transport_tos": 0 00:20:00.240 } 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "method": "bdev_nvme_attach_controller", 00:20:00.240 "params": { 00:20:00.240 "adrfam": "IPv4", 00:20:00.240 "ctrlr_loss_timeout_sec": 0, 00:20:00.240 "ddgst": false, 00:20:00.240 "fast_io_fail_timeout_sec": 0, 00:20:00.240 "hdgst": false, 00:20:00.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.240 "multipath": "multipath", 00:20:00.240 "name": "TLSTEST", 00:20:00.240 "prchk_guard": false, 00:20:00.240 "prchk_reftag": 
false, 00:20:00.240 "psk": "key0", 00:20:00.240 "reconnect_delay_sec": 0, 00:20:00.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.240 "traddr": "10.0.0.3", 00:20:00.240 "trsvcid": "4420", 00:20:00.240 "trtype": "TCP" 00:20:00.240 } 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "method": "bdev_nvme_set_hotplug", 00:20:00.240 "params": { 00:20:00.240 "enable": false, 00:20:00.240 "period_us": 100000 00:20:00.240 } 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "method": "bdev_wait_for_examine" 00:20:00.240 } 00:20:00.240 ] 00:20:00.240 }, 00:20:00.240 { 00:20:00.240 "subsystem": "nbd", 00:20:00.240 "config": [] 00:20:00.240 } 00:20:00.240 ] 00:20:00.240 }' 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84265 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84265 ']' 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84265 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84265 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:00.240 killing process with pid 84265 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84265' 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84265 00:20:00.240 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.240 00:20:00.240 Latency(us) 00:20:00.240 [2024-12-06T13:13:48.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.240 [2024-12-06T13:13:48.880Z] =================================================================================================================== 00:20:00.240 [2024-12-06T13:13:48.880Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.240 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84265 00:20:00.498 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84163 00:20:00.498 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84163 ']' 00:20:00.498 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84163 00:20:00.498 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.498 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.498 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84163 00:20:00.498 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:00.498 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:00.498 killing process with pid 84163 00:20:00.498 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84163' 00:20:00.498 
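The kill/wait choreography traced above (tls.sh lines 201-202) is the shape of autotest's killprocess helper: guard against an empty pid, probe liveness with kill -0, read the command name with ps so a sudo wrapper is never signalled, then kill and reap. A minimal re-creation of that flow, assuming Linux and a child of the current shell; this is a sketch inferred from the xtrace, not the verbatim autotest_common.sh code:

  # killprocess-style helper, re-created from the trace above (hypothetical)
  killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 1    # process must still be alive
    local name
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_2 for bdevperf
    [ "$name" = sudo ] && return 1            # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap so the pid cannot be reused
  }

wait only succeeds on children of the invoking shell, which is why the helper runs in the same shell that launched bdevperf and nvmf_tgt.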
13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84163 00:20:00.498 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84163 00:20:00.756 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:00.756 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:00.756 "subsystems": [ 00:20:00.756 { 00:20:00.756 "subsystem": "keyring", 00:20:00.756 "config": [ 00:20:00.756 { 00:20:00.756 "method": "keyring_file_add_key", 00:20:00.756 "params": { 00:20:00.756 "name": "key0", 00:20:00.756 "path": "/tmp/tmp.Tz0RFqROjd" 00:20:00.756 } 00:20:00.756 } 00:20:00.756 ] 00:20:00.756 }, 00:20:00.756 { 00:20:00.756 "subsystem": "iobuf", 00:20:00.756 "config": [ 00:20:00.756 { 00:20:00.756 "method": "iobuf_set_options", 00:20:00.757 "params": { 00:20:00.757 "enable_numa": false, 00:20:00.757 "large_bufsize": 135168, 00:20:00.757 "large_pool_count": 1024, 00:20:00.757 "small_bufsize": 8192, 00:20:00.757 "small_pool_count": 8192 00:20:00.757 } 00:20:00.757 } 00:20:00.757 ] 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "subsystem": "sock", 00:20:00.757 "config": [ 00:20:00.757 { 00:20:00.757 "method": "sock_set_default_impl", 00:20:00.757 "params": { 00:20:00.757 "impl_name": "posix" 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "sock_impl_set_options", 00:20:00.757 "params": { 00:20:00.757 "enable_ktls": false, 00:20:00.757 "enable_placement_id": 0, 00:20:00.757 "enable_quickack": false, 00:20:00.757 "enable_recv_pipe": true, 00:20:00.757 "enable_zerocopy_send_client": false, 00:20:00.757 "enable_zerocopy_send_server": true, 00:20:00.757 "impl_name": "ssl", 00:20:00.757 "recv_buf_size": 4096, 00:20:00.757 "send_buf_size": 4096, 00:20:00.757 "tls_version": 0, 00:20:00.757 "zerocopy_threshold": 0 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "sock_impl_set_options", 00:20:00.757 "params": { 00:20:00.757 "enable_ktls": false, 00:20:00.757 "enable_placement_id": 0, 00:20:00.757 "enable_quickack": false, 00:20:00.757 "enable_recv_pipe": true, 00:20:00.757 "enable_zerocopy_send_client": false, 00:20:00.757 "enable_zerocopy_send_server": true, 00:20:00.757 "impl_name": "posix", 00:20:00.757 "recv_buf_size": 2097152, 00:20:00.757 "send_buf_size": 2097152, 00:20:00.757 "tls_version": 0, 00:20:00.757 "zerocopy_threshold": 0 00:20:00.757 } 00:20:00.757 } 00:20:00.757 ] 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "subsystem": "vmd", 00:20:00.757 "config": [] 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "subsystem": "accel", 00:20:00.757 "config": [ 00:20:00.757 { 00:20:00.757 "method": "accel_set_options", 00:20:00.757 "params": { 00:20:00.757 "buf_count": 2048, 00:20:00.757 "large_cache_size": 16, 00:20:00.757 "sequence_count": 2048, 00:20:00.757 "small_cache_size": 128, 00:20:00.757 "task_count": 2048 00:20:00.757 } 00:20:00.757 } 00:20:00.757 ] 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "subsystem": "bdev", 00:20:00.757 "config": [ 00:20:00.757 { 00:20:00.757 "method": "bdev_set_options", 00:20:00.757 "params": { 00:20:00.757 "bdev_auto_examine": true, 00:20:00.757 "bdev_io_cache_size": 256, 00:20:00.757 "bdev_io_pool_size": 65535, 00:20:00.757 "iobuf_large_cache_size": 16, 00:20:00.757 "iobuf_small_cache_size": 128 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "bdev_raid_set_options", 00:20:00.757 "params": { 00:20:00.757 "process_max_bandwidth_mb_sec": 0, 00:20:00.757 "process_window_size_kb": 
1024 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "bdev_iscsi_set_options", 00:20:00.757 "params": { 00:20:00.757 "timeout_sec": 30 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "bdev_nvme_set_options", 00:20:00.757 "params": { 00:20:00.757 "action_on_timeout": "none", 00:20:00.757 "allow_accel_sequence": false, 00:20:00.757 "arbitration_burst": 0, 00:20:00.757 "bdev_retry_count": 3, 00:20:00.757 "ctrlr_loss_timeout_sec": 0, 00:20:00.757 "delay_cmd_submit": true, 00:20:00.757 "dhchap_dhgroups": [ 00:20:00.757 "null", 00:20:00.757 "ffdhe2048", 00:20:00.757 "ffdhe3072", 00:20:00.757 "ffdhe4096", 00:20:00.757 "ffdhe6144", 00:20:00.757 "ffdhe8192" 00:20:00.757 ], 00:20:00.757 "dhchap_digests": [ 00:20:00.757 "sha256", 00:20:00.757 "sha384", 00:20:00.757 "sha512" 00:20:00.757 ], 00:20:00.757 "disable_auto_failback": false, 00:20:00.757 "fast_io_fail_timeout_sec": 0, 00:20:00.757 "generate_uuids": false, 00:20:00.757 "high_priority_weight": 0, 00:20:00.757 "io_path_stat": false, 00:20:00.757 "io_queue_requests": 0, 00:20:00.757 "keep_alive_timeout_ms": 10000, 00:20:00.757 "low_priority_weight": 0, 00:20:00.757 "medium_priority_weight": 0, 00:20:00.757 "nvme_adminq_poll_period_us": 10000, 00:20:00.757 "nvme_error_stat": false, 00:20:00.757 "nvme_ioq_poll_period_us": 0, 00:20:00.757 "rdma_cm_event_timeout_ms": 0, 00:20:00.757 "rdma_max_cq_size": 0, 00:20:00.757 "rdma_srq_size": 0, 00:20:00.757 "reconnect_delay_sec": 0, 00:20:00.757 "timeout_admin_us": 0, 00:20:00.757 "timeout_us": 0, 00:20:00.757 "transport_ack_timeout": 0, 00:20:00.757 "transport_retry_count": 4, 00:20:00.757 "transport_tos": 0 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "bdev_nvme_set_hotplug", 00:20:00.757 "params": { 00:20:00.757 "enable": false, 00:20:00.757 "period_us": 100000 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "bdev_malloc_create", 00:20:00.757 "params": { 00:20:00.757 "block_size": 4096, 00:20:00.757 "dif_is_head_of_md": false, 00:20:00.757 "dif_pi_format": 0, 00:20:00.757 "dif_type": 0, 00:20:00.757 "md_size": 0, 00:20:00.757 "name": "malloc0", 00:20:00.757 "num_blocks": 8192, 00:20:00.757 "optimal_io_boundary": 0, 00:20:00.757 "physical_block_size": 4096, 00:20:00.757 "uuid": "fc9fa1cb-3118-4b27-b766-6e4503733164" 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "bdev_wait_for_examine" 00:20:00.757 } 00:20:00.757 ] 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "subsystem": "nbd", 00:20:00.757 "config": [] 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "subsystem": "scheduler", 00:20:00.757 "config": [ 00:20:00.757 { 00:20:00.757 "method": "framework_set_scheduler", 00:20:00.757 "params": { 00:20:00.757 "name": "static" 00:20:00.757 } 00:20:00.757 } 00:20:00.757 ] 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "subsystem": "nvmf", 00:20:00.757 "config": [ 00:20:00.757 { 00:20:00.757 "method": "nvmf_set_config", 00:20:00.757 "params": { 00:20:00.757 "admin_cmd_passthru": { 00:20:00.757 "identify_ctrlr": false 00:20:00.757 }, 00:20:00.757 "dhchap_dhgroups": [ 00:20:00.757 "null", 00:20:00.757 "ffdhe2048", 00:20:00.757 "ffdhe3072", 00:20:00.757 "ffdhe4096", 00:20:00.757 "ffdhe6144", 00:20:00.757 "ffdhe8192" 00:20:00.757 ], 00:20:00.757 "dhchap_digests": [ 00:20:00.757 "sha256", 00:20:00.757 "sha384", 00:20:00.757 "sha512" 00:20:00.757 ], 00:20:00.757 "discovery_filter": "match_any" 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "nvmf_set_max_subsystems", 00:20:00.757 "params": { 
00:20:00.757 "max_subsystems": 1024 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "nvmf_set_crdt", 00:20:00.757 "params": { 00:20:00.757 "crdt1": 0, 00:20:00.757 "crdt2": 0, 00:20:00.757 "crdt3": 0 00:20:00.757 } 00:20:00.757 }, 00:20:00.757 { 00:20:00.757 "method": "nvmf_create_transport", 00:20:00.757 "params": { 00:20:00.757 "abort_timeout_sec": 1, 00:20:00.757 "ack_timeout": 0, 00:20:00.757 "buf_cache_size": 4294967295, 00:20:00.757 "c2h_success": false, 00:20:00.757 "data_wr_pool_size": 0, 00:20:00.757 "dif_insert_or_strip": false, 00:20:00.757 "in_capsule_data_size": 4096, 00:20:00.757 "io_unit_size": 131072, 00:20:00.757 "max_aq_depth": 128, 00:20:00.757 "max_io_qpairs_per_ctrlr": 127, 00:20:00.757 "max_io_size": 131072, 00:20:00.757 "max_queue_depth": 128, 00:20:00.757 "num_shared_buffers": 511, 00:20:00.757 "sock_priority": 0, 00:20:00.757 "trtype": "TCP", 00:20:00.758 "zcopy": false 00:20:00.758 } 00:20:00.758 }, 00:20:00.758 { 00:20:00.758 "method": "nvmf_create_subsystem", 00:20:00.758 "params": { 00:20:00.758 "allow_any_host": false, 00:20:00.758 "ana_reporting": false, 00:20:00.758 "max_cntlid": 65519, 00:20:00.758 "max_namespaces": 10, 00:20:00.758 "min_cntlid": 1, 00:20:00.758 "model_number": "SPDK bdev Controller", 00:20:00.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.758 "serial_number": "SPDK00000000000001" 00:20:00.758 } 00:20:00.758 }, 00:20:00.758 { 00:20:00.758 "method": "nvmf_subsystem_add_host", 00:20:00.758 "params": { 00:20:00.758 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.758 "psk": "key0" 00:20:00.758 } 00:20:00.758 }, 00:20:00.758 { 00:20:00.758 "method": "nvmf_subsystem_add_ns", 00:20:00.758 "params": { 00:20:00.758 "namespace": { 00:20:00.758 "bdev_name": "malloc0", 00:20:00.758 "nguid": "FC9FA1CB31184B27B7666E4503733164", 00:20:00.758 "no_auto_visible": false, 00:20:00.758 "nsid": 1, 00:20:00.758 "uuid": "fc9fa1cb-3118-4b27-b766-6e4503733164" 00:20:00.758 }, 00:20:00.758 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:00.758 } 00:20:00.758 }, 00:20:00.758 { 00:20:00.758 "method": "nvmf_subsystem_add_listener", 00:20:00.758 "params": { 00:20:00.758 "listen_address": { 00:20:00.758 "adrfam": "IPv4", 00:20:00.758 "traddr": "10.0.0.3", 00:20:00.758 "trsvcid": "4420", 00:20:00.758 "trtype": "TCP" 00:20:00.758 }, 00:20:00.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.758 "secure_channel": true 00:20:00.758 } 00:20:00.758 } 00:20:00.758 ] 00:20:00.758 } 00:20:00.758 ] 00:20:00.758 }' 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84337 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84337 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84337 ']' 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.758 13:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.758 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.758 [2024-12-06 13:13:49.228137] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:20:00.758 [2024-12-06 13:13:49.228243] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.758 [2024-12-06 13:13:49.373620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.016 [2024-12-06 13:13:49.436753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.016 [2024-12-06 13:13:49.436830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.016 [2024-12-06 13:13:49.436843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.016 [2024-12-06 13:13:49.436851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.016 [2024-12-06 13:13:49.436859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
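This restart is the point of the stage: the JSON that save_config emitted at tls.sh line 198 is echoed back into a fresh nvmf_tgt through /dev/fd/62, so the new target boots with the keyring, TLS listener, subsystem, and namespace already in place, with no further RPCs. A hedged sketch of the same round-trip (rpc.py and nvmf_tgt paths as in the log; the <( ) process substitution is assumed here as one way to obtain such a /dev/fd path):

  # Capture a live target's configuration, then replay it into a new instance.
  tgtconf=$(scripts/rpc.py save_config)
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &

Surviving this save_config/reload cycle with a working TLS setup appears to be exactly what this part of the test verifies.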
00:20:01.016 [2024-12-06 13:13:49.437358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.275 [2024-12-06 13:13:49.682253] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.275 [2024-12-06 13:13:49.714194] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.275 [2024-12-06 13:13:49.714425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84381 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84381 /var/tmp/bdevperf.sock 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84381 ']' 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
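Every app start above is followed by a waitforlisten call (local max_retries=100 in the trace) that blocks until the RPC socket is usable. A simplified stand-in with the same contract, under the assumption that polling liveness plus socket existence is sufficient; the real helper also probes the RPC server itself rather than just the socket file:

  # Poll until an SPDK app is accepting RPCs on its UNIX socket, or give up.
  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while (( max_retries-- > 0 )); do
      kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
      [ -S "$rpc_addr" ] && return 0           # socket exists; rpc.py can connect
      sleep 0.1
    done
    return 1
  }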
00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:01.842 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:01.842 "subsystems": [ 00:20:01.842 { 00:20:01.842 "subsystem": "keyring", 00:20:01.842 "config": [ 00:20:01.842 { 00:20:01.842 "method": "keyring_file_add_key", 00:20:01.842 "params": { 00:20:01.842 "name": "key0", 00:20:01.842 "path": "/tmp/tmp.Tz0RFqROjd" 00:20:01.842 } 00:20:01.842 } 00:20:01.842 ] 00:20:01.842 }, 00:20:01.842 { 00:20:01.842 "subsystem": "iobuf", 00:20:01.842 "config": [ 00:20:01.842 { 00:20:01.842 "method": "iobuf_set_options", 00:20:01.842 "params": { 00:20:01.842 "enable_numa": false, 00:20:01.842 "large_bufsize": 135168, 00:20:01.842 "large_pool_count": 1024, 00:20:01.842 "small_bufsize": 8192, 00:20:01.842 "small_pool_count": 8192 00:20:01.842 } 00:20:01.842 } 00:20:01.842 ] 00:20:01.842 }, 00:20:01.842 { 00:20:01.842 "subsystem": "sock", 00:20:01.842 "config": [ 00:20:01.842 { 00:20:01.842 "method": "sock_set_default_impl", 00:20:01.842 "params": { 00:20:01.842 "impl_name": "posix" 00:20:01.842 } 00:20:01.842 }, 00:20:01.842 { 00:20:01.842 "method": "sock_impl_set_options", 00:20:01.842 "params": { 00:20:01.842 "enable_ktls": false, 00:20:01.842 "enable_placement_id": 0, 00:20:01.842 "enable_quickack": false, 00:20:01.842 "enable_recv_pipe": true, 00:20:01.842 "enable_zerocopy_send_client": false, 00:20:01.842 "enable_zerocopy_send_server": true, 00:20:01.842 "impl_name": "ssl", 00:20:01.842 "recv_buf_size": 4096, 00:20:01.842 "send_buf_size": 4096, 00:20:01.842 "tls_version": 0, 00:20:01.842 "zerocopy_threshold": 0 00:20:01.842 } 00:20:01.842 }, 00:20:01.842 { 00:20:01.842 "method": "sock_impl_set_options", 00:20:01.842 "params": { 00:20:01.842 "enable_ktls": false, 00:20:01.842 "enable_placement_id": 0, 00:20:01.842 "enable_quickack": false, 00:20:01.842 "enable_recv_pipe": true, 00:20:01.842 "enable_zerocopy_send_client": false, 00:20:01.842 "enable_zerocopy_send_server": true, 00:20:01.842 "impl_name": "posix", 00:20:01.842 "recv_buf_size": 2097152, 00:20:01.842 "send_buf_size": 2097152, 00:20:01.842 "tls_version": 0, 00:20:01.842 "zerocopy_threshold": 0 00:20:01.842 } 00:20:01.842 } 00:20:01.842 ] 00:20:01.842 }, 00:20:01.842 { 00:20:01.842 "subsystem": "vmd", 00:20:01.842 "config": [] 00:20:01.842 }, 00:20:01.842 { 00:20:01.842 "subsystem": "accel", 00:20:01.842 "config": [ 00:20:01.842 { 00:20:01.842 "method": "accel_set_options", 00:20:01.842 "params": { 00:20:01.842 "buf_count": 2048, 00:20:01.842 "large_cache_size": 16, 00:20:01.842 "sequence_count": 2048, 00:20:01.842 "small_cache_size": 128, 00:20:01.842 "task_count": 2048 00:20:01.842 } 00:20:01.842 } 00:20:01.842 ] 00:20:01.842 }, 00:20:01.842 { 00:20:01.842 "subsystem": "bdev", 00:20:01.842 "config": [ 00:20:01.842 { 00:20:01.842 "method": "bdev_set_options", 00:20:01.842 "params": { 00:20:01.842 "bdev_auto_examine": true, 00:20:01.842 "bdev_io_cache_size": 256, 00:20:01.842 "bdev_io_pool_size": 65535, 00:20:01.842 "iobuf_large_cache_size": 16, 00:20:01.842 "iobuf_small_cache_size": 128 00:20:01.842 } 00:20:01.842 }, 00:20:01.842 { 00:20:01.842 "method": "bdev_raid_set_options", 
00:20:01.842 "params": { 00:20:01.842 "process_max_bandwidth_mb_sec": 0, 00:20:01.842 "process_window_size_kb": 1024 00:20:01.842 } 00:20:01.842 }, 00:20:01.842 { 00:20:01.842 "method": "bdev_iscsi_set_options", 00:20:01.842 "params": { 00:20:01.842 "timeout_sec": 30 00:20:01.842 } 00:20:01.842 }, 00:20:01.842 { 00:20:01.842 "method": "bdev_nvme_set_options", 00:20:01.842 "params": { 00:20:01.842 "action_on_timeout": "none", 00:20:01.842 "allow_accel_sequence": false, 00:20:01.842 "arbitration_burst": 0, 00:20:01.842 "bdev_retry_count": 3, 00:20:01.842 "ctrlr_loss_timeout_sec": 0, 00:20:01.842 "delay_cmd_submit": true, 00:20:01.842 "dhchap_dhgroups": [ 00:20:01.842 "null", 00:20:01.842 "ffdhe2048", 00:20:01.842 "ffdhe3072", 00:20:01.842 "ffdhe4096", 00:20:01.842 "ffdhe6144", 00:20:01.842 "ffdhe8192" 00:20:01.842 ], 00:20:01.842 "dhchap_digests": [ 00:20:01.842 "sha256", 00:20:01.843 "sha384", 00:20:01.843 "sha512" 00:20:01.843 ], 00:20:01.843 "disable_auto_failback": false, 00:20:01.843 "fast_io_fail_timeout_sec": 0, 00:20:01.843 "generate_uuids": false, 00:20:01.843 "high_priority_weight": 0, 00:20:01.843 "io_path_stat": false, 00:20:01.843 "io_queue_requests": 512, 00:20:01.843 "keep_alive_timeout_ms": 10000, 00:20:01.843 "low_priority_weight": 0, 00:20:01.843 "medium_priority_weight": 0, 00:20:01.843 "nvme_adminq_poll_period_us": 10000, 00:20:01.843 "nvme_error_stat": false, 00:20:01.843 "nvme_ioq_poll_period_us": 0, 00:20:01.843 "rdma_cm_event_timeout_ms": 0, 00:20:01.843 "rdma_max_cq_size": 0, 00:20:01.843 "rdma_srq_size": 0, 00:20:01.843 "reconnect_delay_sec": 0, 00:20:01.843 "timeout_admin_us": 0, 00:20:01.843 "timeout_us": 0, 00:20:01.843 "transport_ack_timeout": 0, 00:20:01.843 "transport_retry_count": 4, 00:20:01.843 "transport_tos": 0 00:20:01.843 } 00:20:01.843 }, 00:20:01.843 { 00:20:01.843 "method": "bdev_nvme_attach_controller", 00:20:01.843 "params": { 00:20:01.843 "adrfam": "IPv4", 00:20:01.843 "ctrlr_loss_timeout_sec": 0, 00:20:01.843 "ddgst": false, 00:20:01.843 "fast_io_fail_timeout_sec": 0, 00:20:01.843 "hdgst": false, 00:20:01.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.843 "multipath": "multipath", 00:20:01.843 "name": "TLSTEST", 00:20:01.843 "prchk_guard": false, 00:20:01.843 "prchk_reftag": false, 00:20:01.843 "psk": "key0", 00:20:01.843 "reconnect_delay_sec": 0, 00:20:01.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.843 "traddr": "10.0.0.3", 00:20:01.843 "trsvcid": "4420", 00:20:01.843 "trtype": "TCP" 00:20:01.843 } 00:20:01.843 }, 00:20:01.843 { 00:20:01.843 "method": "bdev_nvme_set_hotplug", 00:20:01.843 "params": { 00:20:01.843 "enable": false, 00:20:01.843 "period_us": 100000 00:20:01.843 } 00:20:01.843 }, 00:20:01.843 { 00:20:01.843 "method": "bdev_wait_for_examine" 00:20:01.843 } 00:20:01.843 ] 00:20:01.843 }, 00:20:01.843 { 00:20:01.843 "subsystem": "nbd", 00:20:01.843 "config": [] 00:20:01.843 } 00:20:01.843 ] 00:20:01.843 }' 00:20:01.843 [2024-12-06 13:13:50.444109] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:20:01.843 [2024-12-06 13:13:50.444220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84381 ] 00:20:02.100 [2024-12-06 13:13:50.594659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.100 [2024-12-06 13:13:50.663430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.359 [2024-12-06 13:13:50.844787] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.989 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.989 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.989 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:02.989 Running I/O for 10 seconds... 00:20:05.296 3968.00 IOPS, 15.50 MiB/s [2024-12-06T13:13:54.872Z] 4002.50 IOPS, 15.63 MiB/s [2024-12-06T13:13:55.806Z] 3996.00 IOPS, 15.61 MiB/s [2024-12-06T13:13:56.740Z] 4007.25 IOPS, 15.65 MiB/s [2024-12-06T13:13:57.675Z] 4013.40 IOPS, 15.68 MiB/s [2024-12-06T13:13:58.607Z] 4018.33 IOPS, 15.70 MiB/s [2024-12-06T13:13:59.652Z] 4027.86 IOPS, 15.73 MiB/s [2024-12-06T13:14:01.028Z] 4034.00 IOPS, 15.76 MiB/s [2024-12-06T13:14:01.962Z] 4031.89 IOPS, 15.75 MiB/s [2024-12-06T13:14:01.962Z] 4028.30 IOPS, 15.74 MiB/s 00:20:13.322 Latency(us) 00:20:13.322 [2024-12-06T13:14:01.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.322 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:13.322 Verification LBA range: start 0x0 length 0x2000 00:20:13.322 TLSTESTn1 : 10.02 4033.73 15.76 0.00 0.00 31671.42 6851.49 23116.33 00:20:13.322 [2024-12-06T13:14:01.962Z] =================================================================================================================== 00:20:13.322 [2024-12-06T13:14:01.962Z] Total : 4033.73 15.76 0.00 0.00 31671.42 6851.49 23116.33 00:20:13.322 { 00:20:13.322 "results": [ 00:20:13.322 { 00:20:13.322 "job": "TLSTESTn1", 00:20:13.322 "core_mask": "0x4", 00:20:13.322 "workload": "verify", 00:20:13.322 "status": "finished", 00:20:13.322 "verify_range": { 00:20:13.322 "start": 0, 00:20:13.322 "length": 8192 00:20:13.322 }, 00:20:13.322 "queue_depth": 128, 00:20:13.322 "io_size": 4096, 00:20:13.322 "runtime": 10.018261, 00:20:13.322 "iops": 4033.733998345621, 00:20:13.322 "mibps": 15.756773431037582, 00:20:13.322 "io_failed": 0, 00:20:13.322 "io_timeout": 0, 00:20:13.322 "avg_latency_us": 31671.42258529968, 00:20:13.322 "min_latency_us": 6851.490909090909, 00:20:13.322 "max_latency_us": 23116.334545454545 00:20:13.322 } 00:20:13.322 ], 00:20:13.322 "core_count": 1 00:20:13.322 } 00:20:13.322 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:13.322 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84381 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84381 ']' 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84381 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84381 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:13.323 killing process with pid 84381 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84381' 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84381 00:20:13.323 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.323 00:20:13.323 Latency(us) 00:20:13.323 [2024-12-06T13:14:01.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.323 [2024-12-06T13:14:01.963Z] =================================================================================================================== 00:20:13.323 [2024-12-06T13:14:01.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84381 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84337 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84337 ']' 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84337 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84337 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:13.323 killing process with pid 84337 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84337' 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84337 00:20:13.323 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84337 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84530 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84530 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84530 ']' 00:20:13.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.581 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.581 [2024-12-06 13:14:02.185291] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:20:13.581 [2024-12-06 13:14:02.185391] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.839 [2024-12-06 13:14:02.334883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.839 [2024-12-06 13:14:02.402531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.839 [2024-12-06 13:14:02.402601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.839 [2024-12-06 13:14:02.402616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.839 [2024-12-06 13:14:02.402628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.839 [2024-12-06 13:14:02.402637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
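Note the ip netns exec nvmf_tgt_ns_spdk wrapper on every nvmf_tgt launch: the target's TCP stack lives in its own network namespace (presumably the virtual-network topology prepared earlier in the run), which is how it can own 10.0.0.3:4420 while bdevperf dials in from outside. Management is unaffected because the RPC endpoint is a path-based UNIX socket on the shared filesystem, not a TCP port. A sketch; rpc_get_methods is a standard SPDK RPC, used here only as a readiness probe:

  # Data path is namespaced; the control path crosses the boundary freely.
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null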
00:20:13.839 [2024-12-06 13:14:02.403101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.096 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.096 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.096 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.096 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.096 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.096 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.096 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Tz0RFqROjd 00:20:14.096 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Tz0RFqROjd 00:20:14.096 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.406 [2024-12-06 13:14:02.810369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.406 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.665 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:20:14.923 [2024-12-06 13:14:03.410565] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.923 [2024-12-06 13:14:03.410828] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:14.923 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:15.180 malloc0 00:20:15.180 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.438 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd 00:20:16.003 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:16.261 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84632 00:20:16.261 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:16.261 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:16.261 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84632 /var/tmp/bdevperf.sock 00:20:16.261 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84632 ']' 00:20:16.261 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
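The setup_nvmf_tgt sequence just traced (tls.sh lines 50-59) is the target-side TLS bring-up done purely over RPC: transport, subsystem, a listener created with -k so it negotiates TLS, a malloc backing bdev, a namespace, the PSK registered in the keyring, and finally the host grant that binds the PSK to host1. Condensed below, with every argument taken verbatim from the log:

  key=/tmp/tmp.Tz0RFqROjd
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 "$key"
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Ordering matters only in that the keyring entry must exist before nvmf_subsystem_add_host references it by name.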
00:20:16.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.261 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.261 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.261 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.261 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.261 [2024-12-06 13:14:04.699426] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:20:16.261 [2024-12-06 13:14:04.699535] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84632 ] 00:20:16.261 [2024-12-06 13:14:04.852531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.518 [2024-12-06 13:14:04.925215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.450 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.450 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:17.450 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd 00:20:17.450 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:17.708 [2024-12-06 13:14:06.329518] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.016 nvme0n1 00:20:18.016 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.016 Running I/O for 1 seconds... 
00:20:18.998 3862.00 IOPS, 15.09 MiB/s 00:20:18.998 Latency(us) 00:20:18.998 [2024-12-06T13:14:07.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.998 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:18.998 Verification LBA range: start 0x0 length 0x2000 00:20:18.998 nvme0n1 : 1.02 3911.34 15.28 0.00 0.00 32297.16 2487.39 19303.33 00:20:18.998 [2024-12-06T13:14:07.638Z] =================================================================================================================== 00:20:18.998 [2024-12-06T13:14:07.638Z] Total : 3911.34 15.28 0.00 0.00 32297.16 2487.39 19303.33 00:20:18.998 { 00:20:18.998 "results": [ 00:20:18.998 { 00:20:18.998 "job": "nvme0n1", 00:20:18.998 "core_mask": "0x2", 00:20:18.998 "workload": "verify", 00:20:18.998 "status": "finished", 00:20:18.998 "verify_range": { 00:20:18.998 "start": 0, 00:20:18.998 "length": 8192 00:20:18.998 }, 00:20:18.998 "queue_depth": 128, 00:20:18.998 "io_size": 4096, 00:20:18.998 "runtime": 1.020112, 00:20:18.998 "iops": 3911.335225935976, 00:20:18.998 "mibps": 15.278653226312406, 00:20:18.998 "io_failed": 0, 00:20:18.998 "io_timeout": 0, 00:20:18.998 "avg_latency_us": 32297.156914103438, 00:20:18.998 "min_latency_us": 2487.389090909091, 00:20:18.998 "max_latency_us": 19303.33090909091 00:20:18.998 } 00:20:18.998 ], 00:20:18.998 "core_count": 1 00:20:18.998 } 00:20:18.998 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84632 00:20:18.998 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84632 ']' 00:20:18.998 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84632 00:20:18.998 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:18.998 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.998 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84632 00:20:19.268 killing process with pid 84632 00:20:19.268 Received shutdown signal, test time was about 1.000000 seconds 00:20:19.268 00:20:19.268 Latency(us) 00:20:19.268 [2024-12-06T13:14:07.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.268 [2024-12-06T13:14:07.908Z] =================================================================================================================== 00:20:19.268 [2024-12-06T13:14:07.908Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84632' 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84632 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84632 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84530 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84530 ']' 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84530 00:20:19.268 13:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84530 00:20:19.268 killing process with pid 84530 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84530' 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84530 00:20:19.268 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84530 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84707 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84707 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84707 ']' 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.533 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.533 [2024-12-06 13:14:08.141945] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:20:19.533 [2024-12-06 13:14:08.142259] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.792 [2024-12-06 13:14:08.286164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.792 [2024-12-06 13:14:08.347646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.792 [2024-12-06 13:14:08.347900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:19.792 [2024-12-06 13:14:08.348075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.792 [2024-12-06 13:14:08.348207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.792 [2024-12-06 13:14:08.348244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.792 [2024-12-06 13:14:08.348760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.051 [2024-12-06 13:14:08.528186] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.051 malloc0 00:20:20.051 [2024-12-06 13:14:08.559397] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.051 [2024-12-06 13:14:08.559647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:20.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84744 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84744 /var/tmp/bdevperf.sock 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84744 ']' 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.051 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.051 [2024-12-06 13:14:08.648728] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:20:20.051 [2024-12-06 13:14:08.649087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84744 ] 00:20:20.310 [2024-12-06 13:14:08.797961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.310 [2024-12-06 13:14:08.863301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.242 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.242 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:21.242 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd 00:20:21.503 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:21.765 [2024-12-06 13:14:10.275511] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.765 nvme0n1 00:20:21.765 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.023 Running I/O for 1 seconds... 00:20:22.958 3840.00 IOPS, 15.00 MiB/s 00:20:22.958 Latency(us) 00:20:22.958 [2024-12-06T13:14:11.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.958 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:22.958 Verification LBA range: start 0x0 length 0x2000 00:20:22.958 nvme0n1 : 1.03 3844.68 15.02 0.00 0.00 32914.39 10247.45 23592.96 00:20:22.958 [2024-12-06T13:14:11.598Z] =================================================================================================================== 00:20:22.958 [2024-12-06T13:14:11.598Z] Total : 3844.68 15.02 0.00 0.00 32914.39 10247.45 23592.96 00:20:22.958 { 00:20:22.958 "results": [ 00:20:22.958 { 00:20:22.958 "job": "nvme0n1", 00:20:22.958 "core_mask": "0x2", 00:20:22.958 "workload": "verify", 00:20:22.958 "status": "finished", 00:20:22.958 "verify_range": { 00:20:22.958 "start": 0, 00:20:22.958 "length": 8192 00:20:22.958 }, 00:20:22.958 "queue_depth": 128, 00:20:22.958 "io_size": 4096, 00:20:22.958 "runtime": 1.032076, 00:20:22.958 "iops": 3844.678105100787, 00:20:22.958 "mibps": 15.01827384804995, 00:20:22.958 "io_failed": 0, 00:20:22.958 "io_timeout": 0, 00:20:22.958 "avg_latency_us": 32914.39296187683, 00:20:22.958 "min_latency_us": 10247.447272727273, 00:20:22.958 "max_latency_us": 23592.96 00:20:22.958 } 00:20:22.958 ], 00:20:22.958 "core_count": 1 00:20:22.958 } 00:20:22.958 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:22.958 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.958 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.218 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.218 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:20:23.218 "subsystems": [ 00:20:23.218 { 00:20:23.218 "subsystem": "keyring", 00:20:23.218 "config": [ 00:20:23.218 { 00:20:23.218 "method": "keyring_file_add_key", 00:20:23.218 "params": { 00:20:23.218 "name": "key0", 00:20:23.218 "path": "/tmp/tmp.Tz0RFqROjd" 00:20:23.218 } 00:20:23.218 } 00:20:23.218 ] 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "subsystem": "iobuf", 00:20:23.218 "config": [ 00:20:23.218 { 00:20:23.218 "method": "iobuf_set_options", 00:20:23.218 "params": { 00:20:23.218 "enable_numa": false, 00:20:23.218 "large_bufsize": 135168, 00:20:23.218 "large_pool_count": 1024, 00:20:23.218 "small_bufsize": 8192, 00:20:23.218 "small_pool_count": 8192 00:20:23.218 } 00:20:23.218 } 00:20:23.218 ] 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "subsystem": "sock", 00:20:23.218 "config": [ 00:20:23.218 { 00:20:23.218 "method": "sock_set_default_impl", 00:20:23.218 "params": { 00:20:23.218 "impl_name": "posix" 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "sock_impl_set_options", 00:20:23.218 "params": { 00:20:23.218 "enable_ktls": false, 00:20:23.218 "enable_placement_id": 0, 00:20:23.218 "enable_quickack": false, 00:20:23.218 "enable_recv_pipe": true, 00:20:23.218 "enable_zerocopy_send_client": false, 00:20:23.218 "enable_zerocopy_send_server": true, 00:20:23.218 "impl_name": "ssl", 00:20:23.218 "recv_buf_size": 4096, 00:20:23.218 "send_buf_size": 4096, 00:20:23.218 "tls_version": 0, 00:20:23.218 "zerocopy_threshold": 0 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "sock_impl_set_options", 00:20:23.218 "params": { 00:20:23.218 "enable_ktls": false, 00:20:23.218 "enable_placement_id": 0, 00:20:23.218 "enable_quickack": false, 00:20:23.218 "enable_recv_pipe": true, 00:20:23.218 "enable_zerocopy_send_client": false, 00:20:23.218 "enable_zerocopy_send_server": true, 00:20:23.218 "impl_name": "posix", 00:20:23.218 "recv_buf_size": 2097152, 00:20:23.218 "send_buf_size": 2097152, 00:20:23.218 "tls_version": 0, 00:20:23.218 "zerocopy_threshold": 0 00:20:23.218 } 00:20:23.218 } 00:20:23.218 ] 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "subsystem": "vmd", 00:20:23.218 "config": [] 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "subsystem": "accel", 00:20:23.218 "config": [ 00:20:23.218 { 00:20:23.218 "method": "accel_set_options", 00:20:23.218 "params": { 00:20:23.218 "buf_count": 2048, 00:20:23.218 "large_cache_size": 16, 00:20:23.218 "sequence_count": 2048, 00:20:23.218 "small_cache_size": 128, 00:20:23.218 "task_count": 2048 00:20:23.218 } 00:20:23.218 } 00:20:23.218 ] 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "subsystem": "bdev", 00:20:23.218 "config": [ 00:20:23.218 { 00:20:23.218 "method": "bdev_set_options", 00:20:23.218 "params": { 00:20:23.218 "bdev_auto_examine": true, 00:20:23.218 "bdev_io_cache_size": 256, 00:20:23.218 "bdev_io_pool_size": 65535, 00:20:23.218 "iobuf_large_cache_size": 16, 00:20:23.218 "iobuf_small_cache_size": 128 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "bdev_raid_set_options", 00:20:23.218 "params": { 00:20:23.218 "process_max_bandwidth_mb_sec": 0, 00:20:23.218 "process_window_size_kb": 1024 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "bdev_iscsi_set_options", 00:20:23.218 "params": { 00:20:23.218 "timeout_sec": 30 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "bdev_nvme_set_options", 00:20:23.218 "params": { 00:20:23.218 "action_on_timeout": "none", 00:20:23.218 "allow_accel_sequence": false, 00:20:23.218 "arbitration_burst": 0, 00:20:23.218 
"bdev_retry_count": 3, 00:20:23.218 "ctrlr_loss_timeout_sec": 0, 00:20:23.218 "delay_cmd_submit": true, 00:20:23.218 "dhchap_dhgroups": [ 00:20:23.218 "null", 00:20:23.218 "ffdhe2048", 00:20:23.218 "ffdhe3072", 00:20:23.218 "ffdhe4096", 00:20:23.218 "ffdhe6144", 00:20:23.218 "ffdhe8192" 00:20:23.218 ], 00:20:23.218 "dhchap_digests": [ 00:20:23.218 "sha256", 00:20:23.218 "sha384", 00:20:23.218 "sha512" 00:20:23.218 ], 00:20:23.218 "disable_auto_failback": false, 00:20:23.218 "fast_io_fail_timeout_sec": 0, 00:20:23.218 "generate_uuids": false, 00:20:23.218 "high_priority_weight": 0, 00:20:23.218 "io_path_stat": false, 00:20:23.218 "io_queue_requests": 0, 00:20:23.218 "keep_alive_timeout_ms": 10000, 00:20:23.218 "low_priority_weight": 0, 00:20:23.218 "medium_priority_weight": 0, 00:20:23.218 "nvme_adminq_poll_period_us": 10000, 00:20:23.218 "nvme_error_stat": false, 00:20:23.218 "nvme_ioq_poll_period_us": 0, 00:20:23.218 "rdma_cm_event_timeout_ms": 0, 00:20:23.218 "rdma_max_cq_size": 0, 00:20:23.218 "rdma_srq_size": 0, 00:20:23.218 "reconnect_delay_sec": 0, 00:20:23.218 "timeout_admin_us": 0, 00:20:23.218 "timeout_us": 0, 00:20:23.218 "transport_ack_timeout": 0, 00:20:23.218 "transport_retry_count": 4, 00:20:23.218 "transport_tos": 0 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "bdev_nvme_set_hotplug", 00:20:23.218 "params": { 00:20:23.218 "enable": false, 00:20:23.218 "period_us": 100000 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "bdev_malloc_create", 00:20:23.218 "params": { 00:20:23.218 "block_size": 4096, 00:20:23.218 "dif_is_head_of_md": false, 00:20:23.218 "dif_pi_format": 0, 00:20:23.218 "dif_type": 0, 00:20:23.218 "md_size": 0, 00:20:23.218 "name": "malloc0", 00:20:23.218 "num_blocks": 8192, 00:20:23.218 "optimal_io_boundary": 0, 00:20:23.218 "physical_block_size": 4096, 00:20:23.218 "uuid": "22bd5bda-b368-4e20-8d41-9a4659e3062e" 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "bdev_wait_for_examine" 00:20:23.218 } 00:20:23.218 ] 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "subsystem": "nbd", 00:20:23.218 "config": [] 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "subsystem": "scheduler", 00:20:23.218 "config": [ 00:20:23.218 { 00:20:23.218 "method": "framework_set_scheduler", 00:20:23.218 "params": { 00:20:23.218 "name": "static" 00:20:23.218 } 00:20:23.218 } 00:20:23.218 ] 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "subsystem": "nvmf", 00:20:23.218 "config": [ 00:20:23.218 { 00:20:23.218 "method": "nvmf_set_config", 00:20:23.218 "params": { 00:20:23.218 "admin_cmd_passthru": { 00:20:23.218 "identify_ctrlr": false 00:20:23.218 }, 00:20:23.218 "dhchap_dhgroups": [ 00:20:23.218 "null", 00:20:23.218 "ffdhe2048", 00:20:23.218 "ffdhe3072", 00:20:23.218 "ffdhe4096", 00:20:23.218 "ffdhe6144", 00:20:23.218 "ffdhe8192" 00:20:23.218 ], 00:20:23.218 "dhchap_digests": [ 00:20:23.218 "sha256", 00:20:23.218 "sha384", 00:20:23.218 "sha512" 00:20:23.218 ], 00:20:23.218 "discovery_filter": "match_any" 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "nvmf_set_max_subsystems", 00:20:23.218 "params": { 00:20:23.218 "max_subsystems": 1024 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "nvmf_set_crdt", 00:20:23.218 "params": { 00:20:23.218 "crdt1": 0, 00:20:23.218 "crdt2": 0, 00:20:23.218 "crdt3": 0 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "nvmf_create_transport", 00:20:23.218 "params": { 00:20:23.218 "abort_timeout_sec": 1, 00:20:23.218 "ack_timeout": 0, 
00:20:23.218 "buf_cache_size": 4294967295, 00:20:23.218 "c2h_success": false, 00:20:23.218 "data_wr_pool_size": 0, 00:20:23.218 "dif_insert_or_strip": false, 00:20:23.218 "in_capsule_data_size": 4096, 00:20:23.218 "io_unit_size": 131072, 00:20:23.218 "max_aq_depth": 128, 00:20:23.218 "max_io_qpairs_per_ctrlr": 127, 00:20:23.218 "max_io_size": 131072, 00:20:23.218 "max_queue_depth": 128, 00:20:23.218 "num_shared_buffers": 511, 00:20:23.218 "sock_priority": 0, 00:20:23.218 "trtype": "TCP", 00:20:23.218 "zcopy": false 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "nvmf_create_subsystem", 00:20:23.218 "params": { 00:20:23.218 "allow_any_host": false, 00:20:23.218 "ana_reporting": false, 00:20:23.218 "max_cntlid": 65519, 00:20:23.218 "max_namespaces": 32, 00:20:23.218 "min_cntlid": 1, 00:20:23.218 "model_number": "SPDK bdev Controller", 00:20:23.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.218 "serial_number": "00000000000000000000" 00:20:23.218 } 00:20:23.218 }, 00:20:23.218 { 00:20:23.218 "method": "nvmf_subsystem_add_host", 00:20:23.219 "params": { 00:20:23.219 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.219 "psk": "key0" 00:20:23.219 } 00:20:23.219 }, 00:20:23.219 { 00:20:23.219 "method": "nvmf_subsystem_add_ns", 00:20:23.219 "params": { 00:20:23.219 "namespace": { 00:20:23.219 "bdev_name": "malloc0", 00:20:23.219 "nguid": "22BD5BDAB3684E208D419A4659E3062E", 00:20:23.219 "no_auto_visible": false, 00:20:23.219 "nsid": 1, 00:20:23.219 "uuid": "22bd5bda-b368-4e20-8d41-9a4659e3062e" 00:20:23.219 }, 00:20:23.219 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:23.219 } 00:20:23.219 }, 00:20:23.219 { 00:20:23.219 "method": "nvmf_subsystem_add_listener", 00:20:23.219 "params": { 00:20:23.219 "listen_address": { 00:20:23.219 "adrfam": "IPv4", 00:20:23.219 "traddr": "10.0.0.3", 00:20:23.219 "trsvcid": "4420", 00:20:23.219 "trtype": "TCP" 00:20:23.219 }, 00:20:23.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.219 "secure_channel": false, 00:20:23.219 "sock_impl": "ssl" 00:20:23.219 } 00:20:23.219 } 00:20:23.219 ] 00:20:23.219 } 00:20:23.219 ] 00:20:23.219 }' 00:20:23.219 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:23.478 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:23.478 "subsystems": [ 00:20:23.478 { 00:20:23.478 "subsystem": "keyring", 00:20:23.478 "config": [ 00:20:23.478 { 00:20:23.478 "method": "keyring_file_add_key", 00:20:23.478 "params": { 00:20:23.478 "name": "key0", 00:20:23.478 "path": "/tmp/tmp.Tz0RFqROjd" 00:20:23.478 } 00:20:23.478 } 00:20:23.478 ] 00:20:23.478 }, 00:20:23.478 { 00:20:23.478 "subsystem": "iobuf", 00:20:23.478 "config": [ 00:20:23.478 { 00:20:23.478 "method": "iobuf_set_options", 00:20:23.478 "params": { 00:20:23.478 "enable_numa": false, 00:20:23.478 "large_bufsize": 135168, 00:20:23.478 "large_pool_count": 1024, 00:20:23.478 "small_bufsize": 8192, 00:20:23.478 "small_pool_count": 8192 00:20:23.478 } 00:20:23.478 } 00:20:23.478 ] 00:20:23.478 }, 00:20:23.478 { 00:20:23.478 "subsystem": "sock", 00:20:23.478 "config": [ 00:20:23.478 { 00:20:23.478 "method": "sock_set_default_impl", 00:20:23.478 "params": { 00:20:23.478 "impl_name": "posix" 00:20:23.478 } 00:20:23.478 }, 00:20:23.478 { 00:20:23.478 "method": "sock_impl_set_options", 00:20:23.478 "params": { 00:20:23.478 "enable_ktls": false, 00:20:23.478 "enable_placement_id": 0, 
00:20:23.478 "enable_quickack": false, 00:20:23.478 "enable_recv_pipe": true, 00:20:23.478 "enable_zerocopy_send_client": false, 00:20:23.478 "enable_zerocopy_send_server": true, 00:20:23.478 "impl_name": "ssl", 00:20:23.478 "recv_buf_size": 4096, 00:20:23.478 "send_buf_size": 4096, 00:20:23.478 "tls_version": 0, 00:20:23.478 "zerocopy_threshold": 0 00:20:23.478 } 00:20:23.478 }, 00:20:23.478 { 00:20:23.478 "method": "sock_impl_set_options", 00:20:23.478 "params": { 00:20:23.478 "enable_ktls": false, 00:20:23.478 "enable_placement_id": 0, 00:20:23.478 "enable_quickack": false, 00:20:23.478 "enable_recv_pipe": true, 00:20:23.478 "enable_zerocopy_send_client": false, 00:20:23.478 "enable_zerocopy_send_server": true, 00:20:23.478 "impl_name": "posix", 00:20:23.478 "recv_buf_size": 2097152, 00:20:23.478 "send_buf_size": 2097152, 00:20:23.478 "tls_version": 0, 00:20:23.478 "zerocopy_threshold": 0 00:20:23.478 } 00:20:23.478 } 00:20:23.478 ] 00:20:23.478 }, 00:20:23.478 { 00:20:23.478 "subsystem": "vmd", 00:20:23.478 "config": [] 00:20:23.478 }, 00:20:23.478 { 00:20:23.478 "subsystem": "accel", 00:20:23.478 "config": [ 00:20:23.478 { 00:20:23.478 "method": "accel_set_options", 00:20:23.478 "params": { 00:20:23.478 "buf_count": 2048, 00:20:23.478 "large_cache_size": 16, 00:20:23.478 "sequence_count": 2048, 00:20:23.478 "small_cache_size": 128, 00:20:23.478 "task_count": 2048 00:20:23.478 } 00:20:23.478 } 00:20:23.479 ] 00:20:23.479 }, 00:20:23.479 { 00:20:23.479 "subsystem": "bdev", 00:20:23.479 "config": [ 00:20:23.479 { 00:20:23.479 "method": "bdev_set_options", 00:20:23.479 "params": { 00:20:23.479 "bdev_auto_examine": true, 00:20:23.479 "bdev_io_cache_size": 256, 00:20:23.479 "bdev_io_pool_size": 65535, 00:20:23.479 "iobuf_large_cache_size": 16, 00:20:23.479 "iobuf_small_cache_size": 128 00:20:23.479 } 00:20:23.479 }, 00:20:23.479 { 00:20:23.479 "method": "bdev_raid_set_options", 00:20:23.479 "params": { 00:20:23.479 "process_max_bandwidth_mb_sec": 0, 00:20:23.479 "process_window_size_kb": 1024 00:20:23.479 } 00:20:23.479 }, 00:20:23.479 { 00:20:23.479 "method": "bdev_iscsi_set_options", 00:20:23.479 "params": { 00:20:23.479 "timeout_sec": 30 00:20:23.479 } 00:20:23.479 }, 00:20:23.479 { 00:20:23.479 "method": "bdev_nvme_set_options", 00:20:23.479 "params": { 00:20:23.479 "action_on_timeout": "none", 00:20:23.479 "allow_accel_sequence": false, 00:20:23.479 "arbitration_burst": 0, 00:20:23.479 "bdev_retry_count": 3, 00:20:23.479 "ctrlr_loss_timeout_sec": 0, 00:20:23.479 "delay_cmd_submit": true, 00:20:23.479 "dhchap_dhgroups": [ 00:20:23.479 "null", 00:20:23.479 "ffdhe2048", 00:20:23.479 "ffdhe3072", 00:20:23.479 "ffdhe4096", 00:20:23.479 "ffdhe6144", 00:20:23.479 "ffdhe8192" 00:20:23.479 ], 00:20:23.479 "dhchap_digests": [ 00:20:23.479 "sha256", 00:20:23.479 "sha384", 00:20:23.479 "sha512" 00:20:23.479 ], 00:20:23.479 "disable_auto_failback": false, 00:20:23.479 "fast_io_fail_timeout_sec": 0, 00:20:23.479 "generate_uuids": false, 00:20:23.479 "high_priority_weight": 0, 00:20:23.479 "io_path_stat": false, 00:20:23.479 "io_queue_requests": 512, 00:20:23.479 "keep_alive_timeout_ms": 10000, 00:20:23.479 "low_priority_weight": 0, 00:20:23.479 "medium_priority_weight": 0, 00:20:23.479 "nvme_adminq_poll_period_us": 10000, 00:20:23.479 "nvme_error_stat": false, 00:20:23.479 "nvme_ioq_poll_period_us": 0, 00:20:23.479 "rdma_cm_event_timeout_ms": 0, 00:20:23.479 "rdma_max_cq_size": 0, 00:20:23.479 "rdma_srq_size": 0, 00:20:23.479 "reconnect_delay_sec": 0, 00:20:23.479 "timeout_admin_us": 0, 00:20:23.479 
"timeout_us": 0, 00:20:23.479 "transport_ack_timeout": 0, 00:20:23.479 "transport_retry_count": 4, 00:20:23.479 "transport_tos": 0 00:20:23.479 } 00:20:23.479 }, 00:20:23.479 { 00:20:23.479 "method": "bdev_nvme_attach_controller", 00:20:23.479 "params": { 00:20:23.479 "adrfam": "IPv4", 00:20:23.479 "ctrlr_loss_timeout_sec": 0, 00:20:23.479 "ddgst": false, 00:20:23.479 "fast_io_fail_timeout_sec": 0, 00:20:23.479 "hdgst": false, 00:20:23.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.479 "multipath": "multipath", 00:20:23.479 "name": "nvme0", 00:20:23.479 "prchk_guard": false, 00:20:23.479 "prchk_reftag": false, 00:20:23.479 "psk": "key0", 00:20:23.479 "reconnect_delay_sec": 0, 00:20:23.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.479 "traddr": "10.0.0.3", 00:20:23.479 "trsvcid": "4420", 00:20:23.479 "trtype": "TCP" 00:20:23.479 } 00:20:23.479 }, 00:20:23.479 { 00:20:23.479 "method": "bdev_nvme_set_hotplug", 00:20:23.479 "params": { 00:20:23.479 "enable": false, 00:20:23.479 "period_us": 100000 00:20:23.479 } 00:20:23.479 }, 00:20:23.479 { 00:20:23.479 "method": "bdev_enable_histogram", 00:20:23.479 "params": { 00:20:23.479 "enable": true, 00:20:23.479 "name": "nvme0n1" 00:20:23.479 } 00:20:23.479 }, 00:20:23.479 { 00:20:23.479 "method": "bdev_wait_for_examine" 00:20:23.479 } 00:20:23.479 ] 00:20:23.479 }, 00:20:23.479 { 00:20:23.479 "subsystem": "nbd", 00:20:23.479 "config": [] 00:20:23.479 } 00:20:23.479 ] 00:20:23.479 }' 00:20:23.479 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84744 00:20:23.479 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84744 ']' 00:20:23.479 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84744 00:20:23.479 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:23.479 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.479 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84744 00:20:23.479 killing process with pid 84744 00:20:23.479 Received shutdown signal, test time was about 1.000000 seconds 00:20:23.479 00:20:23.479 Latency(us) 00:20:23.479 [2024-12-06T13:14:12.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.479 [2024-12-06T13:14:12.119Z] =================================================================================================================== 00:20:23.479 [2024-12-06T13:14:12.119Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.479 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:23.479 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:23.479 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84744' 00:20:23.479 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84744 00:20:23.479 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84744 00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84707 00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84707 ']' 00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84707 
00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84707 00:20:23.737 killing process with pid 84707 00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84707' 00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84707 00:20:23.737 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84707 00:20:23.996 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:23.996 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:23.996 "subsystems": [ 00:20:23.996 { 00:20:23.996 "subsystem": "keyring", 00:20:23.996 "config": [ 00:20:23.996 { 00:20:23.996 "method": "keyring_file_add_key", 00:20:23.996 "params": { 00:20:23.996 "name": "key0", 00:20:23.996 "path": "/tmp/tmp.Tz0RFqROjd" 00:20:23.996 } 00:20:23.996 } 00:20:23.996 ] 00:20:23.996 }, 00:20:23.996 { 00:20:23.996 "subsystem": "iobuf", 00:20:23.996 "config": [ 00:20:23.996 { 00:20:23.996 "method": "iobuf_set_options", 00:20:23.996 "params": { 00:20:23.996 "enable_numa": false, 00:20:23.996 "large_bufsize": 135168, 00:20:23.996 "large_pool_count": 1024, 00:20:23.996 "small_bufsize": 8192, 00:20:23.996 "small_pool_count": 8192 00:20:23.996 } 00:20:23.996 } 00:20:23.996 ] 00:20:23.996 }, 00:20:23.996 { 00:20:23.996 "subsystem": "sock", 00:20:23.996 "config": [ 00:20:23.996 { 00:20:23.996 "method": "sock_set_default_impl", 00:20:23.996 "params": { 00:20:23.996 "impl_name": "posix" 00:20:23.996 } 00:20:23.996 }, 00:20:23.996 { 00:20:23.996 "method": "sock_impl_set_options", 00:20:23.996 "params": { 00:20:23.996 "enable_ktls": false, 00:20:23.996 "enable_placement_id": 0, 00:20:23.996 "enable_quickack": false, 00:20:23.996 "enable_recv_pipe": true, 00:20:23.996 "enable_zerocopy_send_client": false, 00:20:23.996 "enable_zerocopy_send_server": true, 00:20:23.996 "impl_name": "ssl", 00:20:23.996 "recv_buf_size": 4096, 00:20:23.996 "send_buf_size": 4096, 00:20:23.996 "tls_version": 0, 00:20:23.996 "zerocopy_threshold": 0 00:20:23.996 } 00:20:23.996 }, 00:20:23.996 { 00:20:23.996 "method": "sock_impl_set_options", 00:20:23.996 "params": { 00:20:23.997 "enable_ktls": false, 00:20:23.997 "enable_placement_id": 0, 00:20:23.997 "enable_quickack": false, 00:20:23.997 "enable_recv_pipe": true, 00:20:23.997 "enable_zerocopy_send_client": false, 00:20:23.997 "enable_zerocopy_send_server": true, 00:20:23.997 "impl_name": "posix", 00:20:23.997 "recv_buf_size": 2097152, 00:20:23.997 "send_buf_size": 2097152, 00:20:23.997 "tls_version": 0, 00:20:23.997 "zerocopy_threshold": 0 00:20:23.997 } 00:20:23.997 } 00:20:23.997 ] 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "subsystem": "vmd", 00:20:23.997 "config": [] 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "subsystem": "accel", 00:20:23.997 "config": [ 00:20:23.997 { 00:20:23.997 "method": "accel_set_options", 00:20:23.997 "params": { 00:20:23.997 "buf_count": 2048, 
00:20:23.997 "large_cache_size": 16, 00:20:23.997 "sequence_count": 2048, 00:20:23.997 "small_cache_size": 128, 00:20:23.997 "task_count": 2048 00:20:23.997 } 00:20:23.997 } 00:20:23.997 ] 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "subsystem": "bdev", 00:20:23.997 "config": [ 00:20:23.997 { 00:20:23.997 "method": "bdev_set_options", 00:20:23.997 "params": { 00:20:23.997 "bdev_auto_examine": true, 00:20:23.997 "bdev_io_cache_size": 256, 00:20:23.997 "bdev_io_pool_size": 65535, 00:20:23.997 "iobuf_large_cache_size": 16, 00:20:23.997 "iobuf_small_cache_size": 128 00:20:23.997 } 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "method": "bdev_raid_set_options", 00:20:23.997 "params": { 00:20:23.997 "process_max_bandwidth_mb_sec": 0, 00:20:23.997 "process_window_size_kb": 1024 00:20:23.997 } 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "method": "bdev_iscsi_set_options", 00:20:23.997 "params": { 00:20:23.997 "timeout_sec": 30 00:20:23.997 } 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "method": "bdev_nvme_set_options", 00:20:23.997 "params": { 00:20:23.997 "action_on_timeout": "none", 00:20:23.997 "allow_accel_sequence": false, 00:20:23.997 "arbitration_burst": 0, 00:20:23.997 "bdev_retry_count": 3, 00:20:23.997 "ctrlr_loss_timeout_sec": 0, 00:20:23.997 "delay_cmd_submit": true, 00:20:23.997 "dhchap_dhgroups": [ 00:20:23.997 "null", 00:20:23.997 "ffdhe2048", 00:20:23.997 "ffdhe3072", 00:20:23.997 "ffdhe4096", 00:20:23.997 "ffdhe6144", 00:20:23.997 "ffdhe8192" 00:20:23.997 ], 00:20:23.997 "dhchap_digests": [ 00:20:23.997 "sha256", 00:20:23.997 "sha384", 00:20:23.997 "sha512" 00:20:23.997 ], 00:20:23.997 "disable_auto_failback": false, 00:20:23.997 "fast_io_fail_timeout_sec": 0, 00:20:23.997 "generate_uuids": false, 00:20:23.997 "high_priority_weight": 0, 00:20:23.997 "io_path_stat": false, 00:20:23.997 "io_queue_requests": 0, 00:20:23.997 "keep_alive_timeout_ms": 10000, 00:20:23.997 "low_priority_weight": 0, 00:20:23.997 "medium_priority_weight": 0, 00:20:23.997 "nvme_adminq_poll_period_us": 10000, 00:20:23.997 "nvme_error_stat": false, 00:20:23.997 "nvme_ioq_poll_period_us": 0, 00:20:23.997 "rdma_cm_event_timeout_ms": 0, 00:20:23.997 "rdma_max_cq_size": 0, 00:20:23.997 "rdma_srq_size": 0, 00:20:23.997 "reconnect_delay_sec": 0, 00:20:23.997 "timeout_admin_us": 0, 00:20:23.997 "timeout_us": 0, 00:20:23.997 "transport_ack_timeout": 0, 00:20:23.997 "transport_retry_count": 4, 00:20:23.997 "transport_tos": 0 00:20:23.997 } 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "method": "bdev_nvme_set_hotplug", 00:20:23.997 "params": { 00:20:23.997 "enable": false, 00:20:23.997 "period_us": 100000 00:20:23.997 } 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "method": "bdev_malloc_create", 00:20:23.997 "params": { 00:20:23.997 "block_size": 4096, 00:20:23.997 "dif_is_head_of_md": false, 00:20:23.997 "dif_pi_format": 0, 00:20:23.997 "dif_type": 0, 00:20:23.997 "md_size": 0, 00:20:23.997 "name": "malloc0", 00:20:23.997 "num_blocks": 8192, 00:20:23.997 "optimal_io_boundary": 0, 00:20:23.997 "physical_block_size": 4096, 00:20:23.997 "uuid": "22bd5bda-b368-4e20-8d41-9a4659e3062e" 00:20:23.997 } 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "method": "bdev_wait_for_examine" 00:20:23.997 } 00:20:23.997 ] 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "subsystem": "nbd", 00:20:23.997 "config": [] 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "subsystem": "scheduler", 00:20:23.997 "config": [ 00:20:23.997 { 00:20:23.997 "method": "framework_set_scheduler", 00:20:23.997 "params": { 00:20:23.997 "name": "static" 00:20:23.997 } 
00:20:23.997 } 00:20:23.997 ] 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "subsystem": "nvmf", 00:20:23.997 "config": [ 00:20:23.997 { 00:20:23.997 "method": "nvmf_set_config", 00:20:23.997 "params": { 00:20:23.997 "admin_cmd_passthru": { 00:20:23.997 "identify_ctrlr": false 00:20:23.997 }, 00:20:23.997 "dhchap_dhgroups": [ 00:20:23.997 "null", 00:20:23.997 "ffdhe2048", 00:20:23.997 "ffdhe3072", 00:20:23.997 "ffdhe4096", 00:20:23.997 "ffdhe6144", 00:20:23.997 "ffdhe8192" 00:20:23.997 ], 00:20:23.997 "dhchap_digests": [ 00:20:23.997 "sha256", 00:20:23.997 "sha384", 00:20:23.997 "sha512" 00:20:23.997 ], 00:20:23.997 "discovery_filter": "match_any" 00:20:23.997 } 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "method": "nvmf_set_max_subsystems", 00:20:23.997 "params": { 00:20:23.997 "max_subsystems": 1024 00:20:23.997 } 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "method": "nvmf_set_crdt", 00:20:23.997 "params": { 00:20:23.997 "crdt1": 0, 00:20:23.997 "crdt2": 0, 00:20:23.997 "crdt3": 0 00:20:23.997 } 00:20:23.997 }, 00:20:23.997 { 00:20:23.997 "method": "nvmf_create_transport", 00:20:23.997 "params": { 00:20:23.997 "abort_timeout_sec": 1, 00:20:23.997 "ack_timeout": 0, 00:20:23.997 "buf_cache_size": 4294967295, 00:20:23.997 "c2h_success": false, 00:20:23.997 "data_wr_pool_size": 0, 00:20:23.997 "dif_insert_or_strip": false, 00:20:23.997 "in_capsule_data_size": 4096, 00:20:23.997 "io_unit_size": 131072, 00:20:23.997 "max_aq_depth": 128, 00:20:23.997 "max_io_qpairs_per_ctrlr": 127, 00:20:23.998 "max_io_size": 131072, 00:20:23.998 "max_queue_depth": 128, 00:20:23.998 "num_shared_buffers": 511, 00:20:23.998 "sock_priority": 0, 00:20:23.998 "trtype": "TCP", 00:20:23.998 "zcopy": false 00:20:23.998 } 00:20:23.998 }, 00:20:23.998 { 00:20:23.998 "method": "nvmf_create_subsystem", 00:20:23.998 "params": { 00:20:23.998 "allow_any_host": false, 00:20:23.998 "ana_reporting": false, 00:20:23.998 "max_cntlid": 65519, 00:20:23.998 "max_namespaces": 32, 00:20:23.998 "min_cntlid": 1, 00:20:23.998 "model_number": "SPDK bdev Controller", 00:20:23.998 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.998 "serial_number": "00000000000000000000" 00:20:23.998 } 00:20:23.998 }, 00:20:23.998 { 00:20:23.998 "method": "nvmf_subsystem_add_host", 00:20:23.998 "params": { 00:20:23.998 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.998 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.998 "psk": "key0" 00:20:23.998 } 00:20:23.998 }, 00:20:23.998 { 00:20:23.998 "method": "nvmf_subsystem_add_ns", 00:20:23.998 "params": { 00:20:23.998 "namespace": { 00:20:23.998 "bdev_name": "malloc0", 00:20:23.998 "nguid": "22BD5BDAB3684E208D419A4659E3062E", 00:20:23.998 "no_auto_visible": false, 00:20:23.998 "nsid": 1, 00:20:23.998 "uuid": "22bd5bda-b368-4e20-8d41-9a4659e3062e" 00:20:23.998 }, 00:20:23.998 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:23.998 } 00:20:23.998 }, 00:20:23.998 { 00:20:23.998 "method": "nvmf_subsystem_add_listener", 00:20:23.998 "params": { 00:20:23.998 "listen_address": { 00:20:23.998 "adrfam": "IPv4", 00:20:23.998 "traddr": "10.0.0.3", 00:20:23.998 "trsvcid": "4420", 00:20:23.998 "trtype": "TCP" 00:20:23.998 }, 00:20:23.998 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.998 "secure_channel": false, 00:20:23.998 "sock_impl": "ssl" 00:20:23.998 } 00:20:23.998 } 00:20:23.998 ] 00:20:23.998 } 00:20:23.998 ] 00:20:23.998 }' 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84840 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84840 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84840 ']' 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.998 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.998 [2024-12-06 13:14:12.532200] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:20:23.998 [2024-12-06 13:14:12.532294] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.256 [2024-12-06 13:14:12.678697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.256 [2024-12-06 13:14:12.736735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.256 [2024-12-06 13:14:12.736810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.256 [2024-12-06 13:14:12.736823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.256 [2024-12-06 13:14:12.736831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.256 [2024-12-06 13:14:12.736838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
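Note the -c /dev/fd/62 argument: the JSON configuration echoed above never touches disk. nvmfappstart feeds it to the new nvmf_tgt through bash process substitution, so this instance boots with exactly the configuration that save_config captured from the previous one. A minimal sketch of the same pattern (paths as in this run; the $tgtcfg variable name is illustrative):

    # capture the live configuration over the RPC socket
    tgtcfg=$(scripts/rpc.py -s /var/tmp/spdk.sock save_config)
    # restart the target, handing the JSON back in as a /dev/fd/N pseudo-file
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")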
00:20:24.256 [2024-12-06 13:14:12.737282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.514 [2024-12-06 13:14:12.983197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.514 [2024-12-06 13:14:13.015120] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.514 [2024-12-06 13:14:13.015371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84881 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84881 /var/tmp/bdevperf.sock 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84881 ']' 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
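Both sides of the upcoming run share one pre-shared key: the listener uses the ssl sock_impl, the subsystem's host entry names psk key0, and bdevperf attaches with the same key, all resolving to the PSK file /tmp/tmp.Tz0RFqROjd registered through the keyring subsystem. Pieced together from the RPCs visible earlier in this log, the essential TLS setup looks roughly like this (error handling omitted; the add_host flag form is an approximation of the JSON params shown above):

    # target side: register the PSK and bind it to the allowed host
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
    # initiator side (bdevperf): present the same key when connecting
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tz0RFqROjd
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1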
00:20:25.082 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:25.082 "subsystems": [ 00:20:25.082 { 00:20:25.082 "subsystem": "keyring", 00:20:25.082 "config": [ 00:20:25.082 { 00:20:25.082 "method": "keyring_file_add_key", 00:20:25.082 "params": { 00:20:25.082 "name": "key0", 00:20:25.082 "path": "/tmp/tmp.Tz0RFqROjd" 00:20:25.082 } 00:20:25.082 } 00:20:25.082 ] 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "subsystem": "iobuf", 00:20:25.082 "config": [ 00:20:25.082 { 00:20:25.082 "method": "iobuf_set_options", 00:20:25.082 "params": { 00:20:25.082 "enable_numa": false, 00:20:25.082 "large_bufsize": 135168, 00:20:25.082 "large_pool_count": 1024, 00:20:25.082 "small_bufsize": 8192, 00:20:25.082 "small_pool_count": 8192 00:20:25.082 } 00:20:25.082 } 00:20:25.082 ] 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "subsystem": "sock", 00:20:25.082 "config": [ 00:20:25.082 { 00:20:25.082 "method": "sock_set_default_impl", 00:20:25.082 "params": { 00:20:25.082 "impl_name": "posix" 00:20:25.082 } 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "method": "sock_impl_set_options", 00:20:25.082 "params": { 00:20:25.082 "enable_ktls": false, 00:20:25.082 "enable_placement_id": 0, 00:20:25.082 "enable_quickack": false, 00:20:25.082 "enable_recv_pipe": true, 00:20:25.082 "enable_zerocopy_send_client": false, 00:20:25.082 "enable_zerocopy_send_server": true, 00:20:25.082 "impl_name": "ssl", 00:20:25.082 "recv_buf_size": 4096, 00:20:25.082 "send_buf_size": 4096, 00:20:25.082 "tls_version": 0, 00:20:25.082 "zerocopy_threshold": 0 00:20:25.082 } 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "method": "sock_impl_set_options", 00:20:25.082 "params": { 00:20:25.082 "enable_ktls": false, 00:20:25.082 "enable_placement_id": 0, 00:20:25.082 "enable_quickack": false, 00:20:25.082 "enable_recv_pipe": true, 00:20:25.082 "enable_zerocopy_send_client": false, 00:20:25.082 "enable_zerocopy_send_server": true, 00:20:25.082 "impl_name": "posix", 00:20:25.082 "recv_buf_size": 2097152, 00:20:25.082 "send_buf_size": 2097152, 00:20:25.082 "tls_version": 0, 00:20:25.082 "zerocopy_threshold": 0 00:20:25.082 } 00:20:25.082 } 00:20:25.082 ] 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "subsystem": "vmd", 00:20:25.082 "config": [] 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "subsystem": "accel", 00:20:25.082 "config": [ 00:20:25.082 { 00:20:25.082 "method": "accel_set_options", 00:20:25.082 "params": { 00:20:25.082 "buf_count": 2048, 00:20:25.082 "large_cache_size": 16, 00:20:25.082 "sequence_count": 2048, 00:20:25.082 "small_cache_size": 128, 00:20:25.082 "task_count": 2048 00:20:25.082 } 00:20:25.082 } 00:20:25.082 ] 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "subsystem": "bdev", 00:20:25.082 "config": [ 00:20:25.082 { 00:20:25.082 "method": "bdev_set_options", 00:20:25.082 "params": { 00:20:25.082 "bdev_auto_examine": true, 00:20:25.082 "bdev_io_cache_size": 256, 00:20:25.082 "bdev_io_pool_size": 65535, 00:20:25.082 "iobuf_large_cache_size": 16, 00:20:25.082 "iobuf_small_cache_size": 128 00:20:25.082 } 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "method": "bdev_raid_set_options", 00:20:25.082 "params": { 00:20:25.082 "process_max_bandwidth_mb_sec": 0, 00:20:25.082 "process_window_size_kb": 1024 00:20:25.082 } 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "method": "bdev_iscsi_set_options", 00:20:25.082 "params": { 00:20:25.082 "timeout_sec": 30 00:20:25.082 } 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "method": "bdev_nvme_set_options", 00:20:25.082 "params": { 00:20:25.082 "action_on_timeout": "none", 
00:20:25.082 "allow_accel_sequence": false, 00:20:25.082 "arbitration_burst": 0, 00:20:25.082 "bdev_retry_count": 3, 00:20:25.082 "ctrlr_loss_timeout_sec": 0, 00:20:25.082 "delay_cmd_submit": true, 00:20:25.082 "dhchap_dhgroups": [ 00:20:25.082 "null", 00:20:25.082 "ffdhe2048", 00:20:25.082 "ffdhe3072", 00:20:25.082 "ffdhe4096", 00:20:25.082 "ffdhe6144", 00:20:25.082 "ffdhe8192" 00:20:25.082 ], 00:20:25.082 "dhchap_digests": [ 00:20:25.082 "sha256", 00:20:25.082 "sha384", 00:20:25.082 "sha512" 00:20:25.082 ], 00:20:25.082 "disable_auto_failback": false, 00:20:25.082 "fast_io_fail_timeout_sec": 0, 00:20:25.082 "generate_uuids": false, 00:20:25.082 "high_priority_weight": 0, 00:20:25.082 "io_path_stat": false, 00:20:25.082 "io_queue_requests": 512, 00:20:25.082 "keep_alive_timeout_ms": 10000, 00:20:25.082 "low_priority_weight": 0, 00:20:25.082 "medium_priority_weight": 0, 00:20:25.082 "nvme_adminq_poll_period_us": 10000, 00:20:25.082 "nvme_error_stat": false, 00:20:25.082 "nvme_ioq_poll_period_us": 0, 00:20:25.082 "rdma_cm_event_timeout_ms": 0, 00:20:25.082 "rdma_max_cq_size": 0, 00:20:25.082 "rdma_srq_size": 0, 00:20:25.082 "reconnect_delay_sec": 0, 00:20:25.082 "timeout_admin_us": 0, 00:20:25.082 "timeout_us": 0, 00:20:25.082 "transport_ack_timeout": 0, 00:20:25.082 "transport_retry_count": 4, 00:20:25.082 "transport_tos": 0 00:20:25.082 } 00:20:25.082 }, 00:20:25.082 { 00:20:25.082 "method": "bdev_nvme_attach_controller", 00:20:25.082 "params": { 00:20:25.082 "adrfam": "IPv4", 00:20:25.083 "ctrlr_loss_timeout_sec": 0, 00:20:25.083 "ddgst": false, 00:20:25.083 "fast_io_fail_timeout_sec": 0, 00:20:25.083 "hdgst": false, 00:20:25.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.083 "multipath": "multipath", 00:20:25.083 "name": "nvme0", 00:20:25.083 "prchk_guard": false, 00:20:25.083 "prchk_reftag": false, 00:20:25.083 "psk": "key0", 00:20:25.083 "reconnect_delay_sec": 0, 00:20:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.083 "traddr": "10.0.0.3", 00:20:25.083 "trsvcid": "4420", 00:20:25.083 "trtype": "TCP" 00:20:25.083 } 00:20:25.083 }, 00:20:25.083 { 00:20:25.083 "method": "bdev_nvme_set_hotplug", 00:20:25.083 "params": { 00:20:25.083 "enable": false, 00:20:25.083 "period_us": 100000 00:20:25.083 } 00:20:25.083 }, 00:20:25.083 { 00:20:25.083 "method": "bdev_enable_histogram", 00:20:25.083 "params": { 00:20:25.083 "enable": true, 00:20:25.083 "name": "nvme0n1" 00:20:25.083 } 00:20:25.083 }, 00:20:25.083 { 00:20:25.083 "method": "bdev_wait_for_examine" 00:20:25.083 } 00:20:25.083 ] 00:20:25.083 }, 00:20:25.083 { 00:20:25.083 "subsystem": "nbd", 00:20:25.083 "config": [] 00:20:25.083 } 00:20:25.083 ] 00:20:25.083 }' 00:20:25.083 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.083 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.083 [2024-12-06 13:14:13.680766] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:20:25.083 [2024-12-06 13:14:13.681047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84881 ] 00:20:25.340 [2024-12-06 13:14:13.842613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.340 [2024-12-06 13:14:13.910337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.598 [2024-12-06 13:14:14.093105] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.163 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.163 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:26.163 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:26.163 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:26.421 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.421 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:26.695 Running I/O for 1 seconds... 00:20:27.646 3939.00 IOPS, 15.39 MiB/s 00:20:27.646 Latency(us) 00:20:27.646 [2024-12-06T13:14:16.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.646 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:27.646 Verification LBA range: start 0x0 length 0x2000 00:20:27.646 nvme0n1 : 1.03 3955.61 15.45 0.00 0.00 31940.09 6911.07 19303.33 00:20:27.646 [2024-12-06T13:14:16.286Z] =================================================================================================================== 00:20:27.646 [2024-12-06T13:14:16.286Z] Total : 3955.61 15.45 0.00 0.00 31940.09 6911.07 19303.33 00:20:27.646 { 00:20:27.646 "results": [ 00:20:27.646 { 00:20:27.646 "job": "nvme0n1", 00:20:27.646 "core_mask": "0x2", 00:20:27.646 "workload": "verify", 00:20:27.646 "status": "finished", 00:20:27.646 "verify_range": { 00:20:27.646 "start": 0, 00:20:27.646 "length": 8192 00:20:27.646 }, 00:20:27.646 "queue_depth": 128, 00:20:27.646 "io_size": 4096, 00:20:27.646 "runtime": 1.02816, 00:20:27.646 "iops": 3955.6100217864923, 00:20:27.646 "mibps": 15.451601647603486, 00:20:27.646 "io_failed": 0, 00:20:27.646 "io_timeout": 0, 00:20:27.646 "avg_latency_us": 31940.09106734917, 00:20:27.646 "min_latency_us": 6911.069090909091, 00:20:27.646 "max_latency_us": 19303.33090909091 00:20:27.646 } 00:20:27.646 ], 00:20:27.646 "core_count": 1 00:20:27.646 } 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:27.646 
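The result block above is internally consistent: iops is completed IOs divided by the 1.02816 s runtime, and with 4096-byte IOs the throughput follows as IOPS x 4096 / 2^20, i.e. 3955.61 x 4096 / 1048576 = 15.45 MiB/s, matching the reported mibps field. A quick sanity check for any of these JSON blocks, assuming it were saved to a file named results.json (jq and awk required):

    jq -r '.results[] | [.iops, .mibps] | @tsv' results.json |
        awk '{printf "derived %.2f MiB/s vs reported %.2f\n", $1*4096/1048576, $2}'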
13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:27.646 nvmf_trace.0 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84881 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84881 ']' 00:20:27.646 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84881 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84881 00:20:27.904 killing process with pid 84881 00:20:27.904 Received shutdown signal, test time was about 1.000000 seconds 00:20:27.904 00:20:27.904 Latency(us) 00:20:27.904 [2024-12-06T13:14:16.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.904 [2024-12-06T13:14:16.544Z] =================================================================================================================== 00:20:27.904 [2024-12-06T13:14:16.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84881' 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84881 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84881 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.904 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:28.162 rmmod nvme_tcp 00:20:28.162 rmmod nvme_fabrics 00:20:28.162 rmmod nvme_keyring 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 84840 ']' 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 84840 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84840 ']' 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84840 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84840 00:20:28.162 killing process with pid 84840 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84840' 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84840 00:20:28.162 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84840 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:28.420 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:28.420 13:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:28.420 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aP8DBLve03 /tmp/tmp.IKKz2Pi7aC /tmp/tmp.Tz0RFqROjd 00:20:28.678 ************************************ 00:20:28.678 END TEST nvmf_tls 00:20:28.678 ************************************ 00:20:28.678 00:20:28.678 real 1m28.762s 00:20:28.678 user 2m26.632s 00:20:28.678 sys 0m28.027s 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.678 ************************************ 00:20:28.678 START TEST nvmf_fips 00:20:28.678 ************************************ 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:28.678 * Looking for test storage... 
00:20:28.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:28.678 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:28.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.937 --rc genhtml_branch_coverage=1 00:20:28.937 --rc genhtml_function_coverage=1 00:20:28.937 --rc genhtml_legend=1 00:20:28.937 --rc geninfo_all_blocks=1 00:20:28.937 --rc geninfo_unexecuted_blocks=1 00:20:28.937 00:20:28.937 ' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:28.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.937 --rc genhtml_branch_coverage=1 00:20:28.937 --rc genhtml_function_coverage=1 00:20:28.937 --rc genhtml_legend=1 00:20:28.937 --rc geninfo_all_blocks=1 00:20:28.937 --rc geninfo_unexecuted_blocks=1 00:20:28.937 00:20:28.937 ' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:28.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.937 --rc genhtml_branch_coverage=1 00:20:28.937 --rc genhtml_function_coverage=1 00:20:28.937 --rc genhtml_legend=1 00:20:28.937 --rc geninfo_all_blocks=1 00:20:28.937 --rc geninfo_unexecuted_blocks=1 00:20:28.937 00:20:28.937 ' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:28.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.937 --rc genhtml_branch_coverage=1 00:20:28.937 --rc genhtml_function_coverage=1 00:20:28.937 --rc genhtml_legend=1 00:20:28.937 --rc geninfo_all_blocks=1 00:20:28.937 --rc geninfo_unexecuted_blocks=1 00:20:28.937 00:20:28.937 ' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
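The cmp_versions trace above is scripts/common.sh splitting each version string on dots and comparing the fields numerically, left to right (here evaluating "lt 1.15 2" for the lcov version check; the same machinery runs again below for "ge 3.1.1 3.0.0" against OpenSSL). A minimal standalone sketch of the same comparison, assuming plain Bash 4+; ver_ge is a hypothetical helper name for illustration, not the actual scripts/common.sh function:

#!/usr/bin/env bash
# Sketch: succeed (return 0) if dotted version $1 >= $2.
# Fields are compared numerically left to right; missing fields count
# as 0, mirroring the traced cmp_versions logic.
ver_ge() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
  done
  return 0  # all fields equal
}
ver_ge 3.1.1 3.0.0 && echo '3.1.1 >= 3.0.0'
ver_ge 1.15 2 || echo '1.15 < 2'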
00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.937 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.937 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:28.938 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:29.199 Error setting digest 00:20:29.199 40A29E21D67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:29.199 40A29E21D67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:29.199 
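The "Error setting digest" failure above is the point of the check: fips.sh has installed spdk_fips.conf as OPENSSL_CONF, confirmed that both the base and fips providers are listed, and then runs openssl md5 expecting it to fail, since MD5 is not a FIPS-approved digest. A short sketch of that negative check, assuming an OpenSSL 3.x host with a FIPS provider configured (an illustration of the idea, not the fips.sh source):

#!/usr/bin/env bash
# Sketch: verify FIPS enforcement. The fips provider must be loaded and
# a non-approved digest (MD5) must be rejected by the crypto layer.
if ! openssl list -providers | grep -qi fips; then
  echo 'FIPS provider not loaded' >&2
  exit 1
fi
if echo -n test | openssl md5 >/dev/null 2>&1; then
  echo 'openssl md5 succeeded: FIPS mode is not being enforced' >&2
  exit 1
fi
echo 'FIPS mode enforced: MD5 correctly rejected'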
13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:29.199 Cannot find device "nvmf_init_br" 00:20:29.199 13:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:29.199 Cannot find device "nvmf_init_br2" 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:29.199 Cannot find device "nvmf_tgt_br" 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.199 Cannot find device "nvmf_tgt_br2" 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:29.199 Cannot find device "nvmf_init_br" 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:29.199 Cannot find device "nvmf_init_br2" 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:20:29.199 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:29.199 Cannot find device "nvmf_tgt_br" 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:29.200 Cannot find device "nvmf_tgt_br2" 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:29.200 Cannot find device "nvmf_br" 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:29.200 Cannot find device "nvmf_init_if" 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:29.200 Cannot find device "nvmf_init_if2" 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:29.200 13:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:29.200 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:29.463 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:29.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:29.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:20:29.464 00:20:29.464 --- 10.0.0.3 ping statistics --- 00:20:29.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.464 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:29.464 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:29.464 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:20:29.464 00:20:29.464 --- 10.0.0.4 ping statistics --- 00:20:29.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.464 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:29.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:29.464 00:20:29.464 --- 10.0.0.1 ping statistics --- 00:20:29.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.464 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:29.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:29.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:29.464 00:20:29.464 --- 10.0.0.2 ping statistics --- 00:20:29.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.464 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:29.464 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=85226 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 85226 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85226 ']' 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.464 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:29.722 [2024-12-06 13:14:18.112508] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:20:29.722 [2024-12-06 13:14:18.112606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.722 [2024-12-06 13:14:18.266174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.722 [2024-12-06 13:14:18.335661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.722 [2024-12-06 13:14:18.335731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.722 [2024-12-06 13:14:18.335746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.722 [2024-12-06 13:14:18.335756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.722 [2024-12-06 13:14:18.335766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.722 [2024-12-06 13:14:18.336259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.S9i 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.S9i 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.S9i 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.S9i 00:20:30.655 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:30.913 [2024-12-06 13:14:19.440304] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.913 [2024-12-06 13:14:19.456236] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.913 [2024-12-06 13:14:19.456511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:30.913 malloc0 00:20:30.913 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.913 13:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85282 00:20:30.913 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.913 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85282 /var/tmp/bdevperf.sock 00:20:30.913 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85282 ']' 00:20:30.913 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.913 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.913 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.913 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.913 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:31.170 [2024-12-06 13:14:19.614044] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:20:31.170 [2024-12-06 13:14:19.614153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85282 ] 00:20:31.170 [2024-12-06 13:14:19.769623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.427 [2024-12-06 13:14:19.840617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.992 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.992 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:31.992 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.S9i 00:20:32.250 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:32.815 [2024-12-06 13:14:21.207836] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:32.815 TLSTESTn1 00:20:32.815 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:32.815 Running I/O for 10 seconds... 
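Before the I/O samples below, the trace has done the TLS plumbing in three steps: write the NVMeTLSkey-1:01:... PSK to a mktemp file restricted to mode 0600, register it with the bdevperf instance as key0 via keyring_file_add_key, and attach the controller over 10.0.0.3:4420 with --psk key0. A condensed sketch of that sequence (commands and flags as traced; assumes nvmf_tgt and bdevperf are already running on these sockets):

#!/usr/bin/env bash
# Sketch: register a TLS PSK with bdevperf and attach over NVMe/TCP,
# following the rpc.py calls shown in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"   # restrict permissions before registering, as the traced chmod does
"$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0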
00:20:35.137 3983.00 IOPS, 15.56 MiB/s [2024-12-06T13:14:24.711Z] 4056.00 IOPS, 15.84 MiB/s [2024-12-06T13:14:25.652Z] 4076.67 IOPS, 15.92 MiB/s [2024-12-06T13:14:26.585Z] 4089.50 IOPS, 15.97 MiB/s [2024-12-06T13:14:27.521Z] 4102.00 IOPS, 16.02 MiB/s [2024-12-06T13:14:28.455Z] 4103.50 IOPS, 16.03 MiB/s [2024-12-06T13:14:29.840Z] 4106.86 IOPS, 16.04 MiB/s [2024-12-06T13:14:30.776Z] 4103.62 IOPS, 16.03 MiB/s [2024-12-06T13:14:31.711Z] 4104.00 IOPS, 16.03 MiB/s [2024-12-06T13:14:31.711Z] 4106.70 IOPS, 16.04 MiB/s 00:20:43.071 Latency(us) 00:20:43.071 [2024-12-06T13:14:31.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.071 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:43.071 Verification LBA range: start 0x0 length 0x2000 00:20:43.071 TLSTESTn1 : 10.02 4112.71 16.07 0.00 0.00 31066.19 5749.29 25976.09 00:20:43.071 [2024-12-06T13:14:31.711Z] =================================================================================================================== 00:20:43.071 [2024-12-06T13:14:31.711Z] Total : 4112.71 16.07 0.00 0.00 31066.19 5749.29 25976.09 00:20:43.071 { 00:20:43.071 "results": [ 00:20:43.071 { 00:20:43.071 "job": "TLSTESTn1", 00:20:43.071 "core_mask": "0x4", 00:20:43.071 "workload": "verify", 00:20:43.071 "status": "finished", 00:20:43.071 "verify_range": { 00:20:43.071 "start": 0, 00:20:43.071 "length": 8192 00:20:43.071 }, 00:20:43.071 "queue_depth": 128, 00:20:43.071 "io_size": 4096, 00:20:43.071 "runtime": 10.016519, 00:20:43.071 "iops": 4112.7062205942, 00:20:43.071 "mibps": 16.065258674196095, 00:20:43.071 "io_failed": 0, 00:20:43.071 "io_timeout": 0, 00:20:43.071 "avg_latency_us": 31066.192876893707, 00:20:43.071 "min_latency_us": 5749.294545454545, 00:20:43.071 "max_latency_us": 25976.087272727273 00:20:43.071 } 00:20:43.071 ], 00:20:43.071 "core_count": 1 00:20:43.071 } 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:43.071 nvmf_trace.0 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85282 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85282 ']' 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
85282 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85282 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:43.071 killing process with pid 85282 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85282' 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85282 00:20:43.071 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85282 00:20:43.071 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.071 00:20:43.071 Latency(us) 00:20:43.071 [2024-12-06T13:14:31.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.071 [2024-12-06T13:14:31.712Z] =================================================================================================================== 00:20:43.072 [2024-12-06T13:14:31.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.330 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:43.330 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:43.330 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:43.330 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.330 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:43.330 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.330 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.330 rmmod nvme_tcp 00:20:43.330 rmmod nvme_fabrics 00:20:43.330 rmmod nvme_keyring 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 85226 ']' 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 85226 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85226 ']' 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 85226 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85226 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:43.331 killing process with pid 85226 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85226' 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85226 00:20:43.331 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85226 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:43.590 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:20:43.849 13:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.S9i 00:20:43.849 00:20:43.849 real 0m15.237s 00:20:43.849 user 0m21.450s 00:20:43.849 sys 0m5.779s 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:43.849 ************************************ 00:20:43.849 END TEST nvmf_fips 00:20:43.849 ************************************ 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.849 13:14:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.142 ************************************ 00:20:44.142 START TEST nvmf_control_msg_list 00:20:44.142 ************************************ 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:44.142 * Looking for test storage... 00:20:44.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:44.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.142 --rc genhtml_branch_coverage=1 00:20:44.142 --rc genhtml_function_coverage=1 00:20:44.142 --rc genhtml_legend=1 00:20:44.142 --rc geninfo_all_blocks=1 00:20:44.142 --rc geninfo_unexecuted_blocks=1 00:20:44.142 00:20:44.142 ' 00:20:44.142 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:44.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.143 --rc genhtml_branch_coverage=1 00:20:44.143 --rc genhtml_function_coverage=1 00:20:44.143 --rc genhtml_legend=1 00:20:44.143 --rc geninfo_all_blocks=1 00:20:44.143 --rc geninfo_unexecuted_blocks=1 00:20:44.143 00:20:44.143 ' 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:44.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.143 --rc genhtml_branch_coverage=1 00:20:44.143 --rc genhtml_function_coverage=1 00:20:44.143 --rc genhtml_legend=1 00:20:44.143 --rc geninfo_all_blocks=1 00:20:44.143 --rc geninfo_unexecuted_blocks=1 00:20:44.143 00:20:44.143 ' 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:44.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.143 --rc genhtml_branch_coverage=1 00:20:44.143 --rc genhtml_function_coverage=1 00:20:44.143 --rc genhtml_legend=1 00:20:44.143 --rc geninfo_all_blocks=1 00:20:44.143 --rc 
geninfo_unexecuted_blocks=1 00:20:44.143 00:20:44.143 ' 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:44.143 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:44.143 Cannot find device "nvmf_init_br" 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:20:44.143 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:44.143 Cannot find device "nvmf_init_br2" 00:20:44.144 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:20:44.144 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:44.144 Cannot find device "nvmf_tgt_br" 00:20:44.144 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:20:44.144 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:44.144 Cannot find device "nvmf_tgt_br2" 00:20:44.144 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:20:44.144 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:44.427 Cannot find device "nvmf_init_br" 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:44.427 Cannot find device "nvmf_init_br2" 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:44.427 Cannot find device "nvmf_tgt_br" 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:44.427 Cannot find device "nvmf_tgt_br2" 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:44.427 Cannot find device "nvmf_br" 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:44.427 Cannot find 
device "nvmf_init_if" 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:44.427 Cannot find device "nvmf_init_if2" 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:44.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:44.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:44.427 13:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:44.427 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:44.427 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:44.427 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:44.427 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:44.686 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:44.686 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:44.686 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:44.686 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:44.686 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:44.686 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:44.686 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:44.686 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:44.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:20:44.686 00:20:44.686 --- 10.0.0.3 ping statistics --- 00:20:44.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.686 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:44.686 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:44.686 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:44.686 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:20:44.686 00:20:44.686 --- 10.0.0.4 ping statistics --- 00:20:44.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.686 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:20:44.686 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:44.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:44.687 00:20:44.687 --- 10.0.0.1 ping statistics --- 00:20:44.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.687 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:44.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:20:44.687 00:20:44.687 --- 10.0.0.2 ping statistics --- 00:20:44.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.687 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85694 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85694 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 85694 ']' 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
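The nvmf_veth_init sequence traced above reduces to a small fixed topology: two initiator veth pairs on the host, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, and all four host-side peer ends enslaved to one bridge. The following is a condensed sketch reassembled from the traced commands, not the common.sh source itself; interface names and addresses are the ones the harness uses:

# Sketch of the topology nvmf_veth_init builds (names/addresses from the trace above).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side endpoints live inside the namespace; their peers stay on the host.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, then bridge the four host-side peers together.
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
         nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$l" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" master nvmf_br
done

# Open NVMe/TCP port 4420 plus bridge-local forwarding. Each rule carries an
# SPDK_NVMF comment so the iptr teardown seen earlier can restore the ruleset
# minus exactly these rules: iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

After this, the four pings in the trace (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the namespace) verify connectivity in both directions across the bridge before nvmf_tgt is launched under ip netns exec nvmf_tgt_ns_spdk.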
00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.687 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:44.687 [2024-12-06 13:14:33.180125] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:20:44.687 [2024-12-06 13:14:33.180217] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.945 [2024-12-06 13:14:33.328685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.945 [2024-12-06 13:14:33.392056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.945 [2024-12-06 13:14:33.392113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.945 [2024-12-06 13:14:33.392128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.945 [2024-12-06 13:14:33.392139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.945 [2024-12-06 13:14:33.392148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.945 [2024-12-06 13:14:33.392626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.882 [2024-12-06 13:14:34.225229] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.882 Malloc0 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.882 [2024-12-06 13:14:34.281636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85744 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85745 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85746 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:45.882 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85744 00:20:45.882 [2024-12-06 13:14:34.472203] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:20:45.882 [2024-12-06 13:14:34.472492] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:45.882 [2024-12-06 13:14:34.482266] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:47.260 Initializing NVMe Controllers 00:20:47.260 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:47.260 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:47.260 Initialization complete. Launching workers. 00:20:47.260 ======================================================== 00:20:47.260 Latency(us) 00:20:47.260 Device Information : IOPS MiB/s Average min max 00:20:47.260 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3313.00 12.94 301.57 150.97 1070.11 00:20:47.260 ======================================================== 00:20:47.260 Total : 3313.00 12.94 301.57 150.97 1070.11 00:20:47.260 00:20:47.260 Initializing NVMe Controllers 00:20:47.260 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:47.260 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:47.260 Initialization complete. Launching workers. 00:20:47.260 ======================================================== 00:20:47.261 Latency(us) 00:20:47.261 Device Information : IOPS MiB/s Average min max 00:20:47.261 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3290.00 12.85 303.67 172.10 1056.93 00:20:47.261 ======================================================== 00:20:47.261 Total : 3290.00 12.85 303.67 172.10 1056.93 00:20:47.261 00:20:47.261 Initializing NVMe Controllers 00:20:47.261 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:47.261 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:47.261 Initialization complete. Launching workers. 
00:20:47.261 ======================================================== 00:20:47.261 Latency(us) 00:20:47.261 Device Information : IOPS MiB/s Average min max 00:20:47.261 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3303.00 12.90 302.33 131.32 2073.12 00:20:47.261 ======================================================== 00:20:47.261 Total : 3303.00 12.90 302.33 131.32 2073.12 00:20:47.261 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85745 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85746 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.261 rmmod nvme_tcp 00:20:47.261 rmmod nvme_fabrics 00:20:47.261 rmmod nvme_keyring 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85694 ']' 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85694 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 85694 ']' 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 85694 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85694 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.261 killing process with pid 85694 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85694' 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 85694 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 85694 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:47.261 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:47.519 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:20:47.519 00:20:47.519 real 0m3.660s 00:20:47.519 user 0m5.665s 00:20:47.519 
sys 0m1.391s 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.519 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.519 ************************************ 00:20:47.519 END TEST nvmf_control_msg_list 00:20:47.519 ************************************ 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:47.831 ************************************ 00:20:47.831 START TEST nvmf_wait_for_buf 00:20:47.831 ************************************ 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:47.831 * Looking for test storage... 00:20:47.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.831 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:47.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.832 --rc genhtml_branch_coverage=1 00:20:47.832 --rc genhtml_function_coverage=1 00:20:47.832 --rc genhtml_legend=1 00:20:47.832 --rc geninfo_all_blocks=1 00:20:47.832 --rc geninfo_unexecuted_blocks=1 00:20:47.832 00:20:47.832 ' 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:47.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.832 --rc genhtml_branch_coverage=1 00:20:47.832 --rc genhtml_function_coverage=1 00:20:47.832 --rc genhtml_legend=1 00:20:47.832 --rc geninfo_all_blocks=1 00:20:47.832 --rc geninfo_unexecuted_blocks=1 00:20:47.832 00:20:47.832 ' 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:47.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.832 --rc genhtml_branch_coverage=1 00:20:47.832 --rc genhtml_function_coverage=1 00:20:47.832 --rc genhtml_legend=1 00:20:47.832 --rc geninfo_all_blocks=1 00:20:47.832 --rc geninfo_unexecuted_blocks=1 00:20:47.832 00:20:47.832 ' 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:47.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.832 --rc genhtml_branch_coverage=1 00:20:47.832 --rc genhtml_function_coverage=1 00:20:47.832 --rc genhtml_legend=1 00:20:47.832 --rc geninfo_all_blocks=1 00:20:47.832 --rc geninfo_unexecuted_blocks=1 00:20:47.832 00:20:47.832 ' 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:47.832 13:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.832 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:47.833 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
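The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': an unset flag expands to the empty string, which test cannot compare numerically, so the branch is skipped with a noisy but harmless error. The usual defensive spelling supplies a default before the comparison (FLAG is a stand-in for whichever variable was unset):

    # Fails with "integer expression expected" when FLAG is unset or empty:
    [ "$FLAG" -eq 1 ] && echo "feature on"
    # Defensive form: default to 0 so the numeric test always sees an integer
    [ "${FLAG:-0}" -eq 1 ] && echo "feature on"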
00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:47.833 Cannot find device "nvmf_init_br" 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:20:47.833 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:48.091 Cannot find device "nvmf_init_br2" 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:48.091 Cannot find device "nvmf_tgt_br" 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:48.091 Cannot find device "nvmf_tgt_br2" 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:48.091 Cannot find device "nvmf_init_br" 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:48.091 Cannot find device "nvmf_init_br2" 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:48.091 Cannot find device "nvmf_tgt_br" 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:48.091 Cannot find device "nvmf_tgt_br2" 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:48.091 Cannot find device "nvmf_br" 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:48.091 Cannot find device "nvmf_init_if" 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:48.091 Cannot find device "nvmf_init_if2" 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:48.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:48.091 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:48.091 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:48.092 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:48.092 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:48.092 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:48.092 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:48.092 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:48.092 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:48.092 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
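The "Cannot find device" and "Cannot open network namespace" lines above are expected: nvmf_veth_init tears down any leftovers (each failed delete is followed by true) before rebuilding its topology, in which the initiator interfaces stay in the root namespace (10.0.0.1, 10.0.0.2), the target interfaces move into nvmf_tgt_ns_spdk (10.0.0.3, 10.0.0.4), and all bridge-side veth peers are enslaved to nvmf_br (the last two enslave commands continue just below). Reduced to a single initiator/target pair, the setup amounts to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # join the two halves
    ip link set nvmf_tgt_br master nvmf_br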
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:48.350 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:48.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:48.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:20:48.350 00:20:48.350 --- 10.0.0.3 ping statistics --- 00:20:48.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.351 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:48.351 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:48.351 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:20:48.351 00:20:48.351 --- 10.0.0.4 ping statistics --- 00:20:48.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.351 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:48.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:48.351 00:20:48.351 --- 10.0.0.1 ping statistics --- 00:20:48.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.351 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:48.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
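The ACCEPT rules above are inserted through the harness's ipts wrapper, which tags every rule with an SPDK_NVMF comment so that teardown can later strip exactly these rules and nothing else; the four pings around this point then confirm the bridged path carries traffic in both directions before any NVMe/TCP traffic is attempted. Reconstructed from the expansion logged above, the wrapper is simply:

    ipts() {
        # Tag the rule so cleanup can filter on the SPDK_NVMF marker
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT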
00:20:48.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:20:48.351 00:20:48.351 --- 10.0.0.2 ping statistics --- 00:20:48.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.351 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=85978 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 85978 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 85978 ']' 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.351 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.351 [2024-12-06 13:14:36.948030] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
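nvmfappstart launches nvmf_tgt inside the namespace with --wait-for-rpc, so the app (pid 85978) comes up paused until the test injects its buffer settings, and waitforlisten polls the RPC socket until it answers. A simplified equivalent of that start-and-wait, with the poll interval and probe method chosen here only for illustration (the real waitforlisten in common/autotest_common.sh is more thorough):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds once /var/tmp/spdk.sock is accepting requests
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || exit 1    # give up if the target already died
        sleep 0.1
    done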
00:20:48.351 [2024-12-06 13:14:36.948138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.609 [2024-12-06 13:14:37.101346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.609 [2024-12-06 13:14:37.169953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.609 [2024-12-06 13:14:37.170021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.609 [2024-12-06 13:14:37.170036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.609 [2024-12-06 13:14:37.170046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.609 [2024-12-06 13:14:37.170056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.609 [2024-12-06 13:14:37.170562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.609 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.609 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:48.609 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.609 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.609 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:48.867 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.867 13:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.868 Malloc0 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.868 [2024-12-06 13:14:37.382908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.868 [2024-12-06 13:14:37.407049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.868 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:49.125 [2024-12-06 13:14:37.598585] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
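With the target paused by --wait-for-rpc, the rpc_cmd sequence above first squeezes the iobuf small pool down to 154 buffers, releases the app with framework_start_init, and only then builds the fabric: a 32 MiB, 512-byte-block malloc bdev exported as a namespace of nqn.2024-07.io.spdk:cnode0 listening on 10.0.0.3:4420. The deliberately starved pool, together with -n 24/-b 24 on the transport, is what forces the buffer-wait path this test exercises. rpc_cmd wraps scripts/rpc.py, so the same configuration by hand would look like:

    rpc='scripts/rpc.py -s /var/tmp/spdk.sock'
    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # starve the small pool
    $rpc framework_start_init                        # un-pause the --wait-for-rpc app
    $rpc bdev_malloc_create -b Malloc0 32 512        # 32 MiB bdev, 512 B blocks
    $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420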
This behavior is deprecated and will be removed in a future release. 00:20:50.498 Initializing NVMe Controllers 00:20:50.498 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:50.498 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:50.498 Initialization complete. Launching workers. 00:20:50.498 ======================================================== 00:20:50.498 Latency(us) 00:20:50.498 Device Information : IOPS MiB/s Average min max 00:20:50.498 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.47 15.93 32610.31 8013.71 68735.21 00:20:50.498 ======================================================== 00:20:50.498 Total : 127.47 15.93 32610.31 8013.71 68735.21 00:20:50.498 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:50.498 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.498 rmmod nvme_tcp 00:20:50.498 rmmod nvme_fabrics 00:20:50.498 rmmod nvme_keyring 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 85978 ']' 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 85978 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 85978 ']' 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 85978 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
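The pass/fail signal for the whole test is the iobuf_get_stats read above: with only 154 small buffers backing 4 outstanding 128 KiB reads from spdk_nvme_perf, the nvmf_TCP module has to retry buffer allocations, the retry counter lands at 2022, and the test would only fail had it stayed at 0. The check distills to:

    retry_count=$(scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo "expected small-pool exhaustion, but no retries were recorded" >&2
        exit 1
    fi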
00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.498 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85978 00:20:50.756 killing process with pid 85978 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85978' 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 85978 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 85978 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:50.756 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:51.015 13:14:39 
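Teardown mirrors setup in reverse: killprocess verifies pid 85978 is really an SPDK reactor (ps reports comm reactor_0) before killing and reaping it, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, iptr restores the firewall minus every SPDK_NVMF-tagged rule, and nvmf_veth_fini (continuing just below) deletes the bridge, veth pairs, and namespace. Pieced together from the iptables-save, grep, and iptables-restore calls logged above, the iptr step is:

    iptr() {
        # Re-load the ruleset with the SPDK_NVMF-tagged rules filtered out,
        # leaving all pre-existing rules untouched
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }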
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:20:51.015 00:20:51.015 real 0m3.397s 00:20:51.015 user 0m2.723s 00:20:51.015 sys 0m0.779s 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.015 ************************************ 00:20:51.015 END TEST nvmf_wait_for_buf 00:20:51.015 ************************************ 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:51.015 ************************************ 00:20:51.015 START TEST nvmf_nsid 00:20:51.015 ************************************ 00:20:51.015 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:51.275 * Looking for test storage... 
00:20:51.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.275 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:51.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.276 --rc genhtml_branch_coverage=1 00:20:51.276 --rc genhtml_function_coverage=1 00:20:51.276 --rc genhtml_legend=1 00:20:51.276 --rc geninfo_all_blocks=1 00:20:51.276 --rc geninfo_unexecuted_blocks=1 00:20:51.276 00:20:51.276 ' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:51.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.276 --rc genhtml_branch_coverage=1 00:20:51.276 --rc genhtml_function_coverage=1 00:20:51.276 --rc genhtml_legend=1 00:20:51.276 --rc geninfo_all_blocks=1 00:20:51.276 --rc geninfo_unexecuted_blocks=1 00:20:51.276 00:20:51.276 ' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:51.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.276 --rc genhtml_branch_coverage=1 00:20:51.276 --rc genhtml_function_coverage=1 00:20:51.276 --rc genhtml_legend=1 00:20:51.276 --rc geninfo_all_blocks=1 00:20:51.276 --rc geninfo_unexecuted_blocks=1 00:20:51.276 00:20:51.276 ' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:51.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.276 --rc genhtml_branch_coverage=1 00:20:51.276 --rc genhtml_function_coverage=1 00:20:51.276 --rc genhtml_legend=1 00:20:51.276 --rc geninfo_all_blocks=1 00:20:51.276 --rc geninfo_unexecuted_blocks=1 00:20:51.276 00:20:51.276 ' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.276 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:51.276 Cannot find device "nvmf_init_br" 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:51.276 Cannot find device "nvmf_init_br2" 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:51.276 Cannot find device "nvmf_tgt_br" 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:20:51.276 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:51.536 Cannot find device "nvmf_tgt_br2" 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:51.536 Cannot find device "nvmf_init_br" 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:51.536 Cannot find device "nvmf_init_br2" 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:51.536 Cannot find device "nvmf_tgt_br" 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:51.536 Cannot find device "nvmf_tgt_br2" 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:51.536 Cannot find device "nvmf_br" 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:51.536 Cannot find device "nvmf_init_if" 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:20:51.536 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:51.536 Cannot find device "nvmf_init_if2" 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:51.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:20:51.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:51.536 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:51.796 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:51.796 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:20:51.796 00:20:51.796 --- 10.0.0.3 ping statistics --- 00:20:51.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.796 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:51.796 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:51.796 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:20:51.796 00:20:51.796 --- 10.0.0.4 ping statistics --- 00:20:51.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.796 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:51.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:51.796 00:20:51.796 --- 10.0.0.1 ping statistics --- 00:20:51.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.796 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:51.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:51.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:20:51.796 00:20:51.796 --- 10.0.0.2 ping statistics --- 00:20:51.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.796 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=86259 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 86259 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86259 ']' 00:20:51.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.796 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:51.796 [2024-12-06 13:14:40.376716] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
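[Editor's note] Each firewall rule above is installed through the ipts wrapper, which appends an SPDK_NVMF-tagged comment so that teardown can later strip exactly these rules; the four pings then verify both directions across the bridge before the target is launched. The tag-and-filter pair, copied from the wrapper calls visible in this trace:

  # Install an ACCEPT rule for the NVMe/TCP port, tagged for later removal:
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # Teardown (the iptr helper seen later in this run) keeps every rule except the tagged ones:
  iptables-save | grep -v SPDK_NVMF | iptables-restore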
00:20:51.796 [2024-12-06 13:14:40.377128] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.069 [2024-12-06 13:14:40.532260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.069 [2024-12-06 13:14:40.600371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.069 [2024-12-06 13:14:40.600441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.069 [2024-12-06 13:14:40.600474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.069 [2024-12-06 13:14:40.600488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.069 [2024-12-06 13:14:40.600497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.069 [2024-12-06 13:14:40.600964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=86284 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
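[Editor's note] get_main_ns_ip, traced above, maps the transport to the name of the variable that holds the right address and then dereferences it; for --transport=tcp that resolves to NVMF_INITIATOR_IP=10.0.0.1, which becomes tgt2addr. A condensed sketch of that lookup, using bash indirect expansion in place of the harness's stepwise checks (values match the trace):

  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  NVMF_INITIATOR_IP=10.0.0.1
  ip_var=${ip_candidates[tcp]}   # name of the variable to read: NVMF_INITIATOR_IP
  echo "${!ip_var}"              # indirect expansion -> 10.0.0.1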
00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=85a9c24d-917a-4651-a46c-e9e859387d9c 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e71c3e1f-7d00-471e-a81e-10fb48d2c1e5 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2a95b049-1728-47ca-8e0d-db19468a26c7 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:52.329 null0 00:20:52.329 null1 00:20:52.329 null2 00:20:52.329 [2024-12-06 13:14:40.844317] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.329 [2024-12-06 13:14:40.868014] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:20:52.329 [2024-12-06 13:14:40.868276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86284 ] 00:20:52.329 [2024-12-06 13:14:40.868464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:52.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 86284 /var/tmp/tgt2.sock 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86284 ']' 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
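[Editor's note] The rpc_cmd body at nsid.sh:63 runs with xtrace disabled, so only its results (bdevs null0..null2 and, shortly after, the listener on 10.0.0.1:4421) appear in the trace. A plausible, partial sketch of the kind of calls it would make, assuming the standard rpc.py verbs; bdev sizes and the -a flag are illustrative, not taken from the log, and transport/listener setup is omitted:

  # Hypothetical reconstruction -- the actual rpc_cmd payload is not shown in the log.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ns1uuid=$(uuidgen)
  $rpc -s /var/tmp/tgt2.sock bdev_null_create null0 100 4096          # 100 MiB, 4 KiB blocks (assumed)
  $rpc -s /var/tmp/tgt2.sock nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
  $rpc -s /var/tmp/tgt2.sock nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 --uuid "$ns1uuid"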
00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.329 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:52.588 [2024-12-06 13:14:41.021583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.588 [2024-12-06 13:14:41.087599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.847 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.847 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:52.847 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:53.414 [2024-12-06 13:14:41.781013] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.414 [2024-12-06 13:14:41.797181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:53.414 nvme0n1 nvme0n2 00:20:53.414 nvme1n1 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:53.414 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:54.790 13:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 85a9c24d-917a-4651-a46c-e9e859387d9c 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=85a9c24d917a4651a46ce9e859387d9c 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 85A9C24D917A4651A46CE9E859387D9C 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 85A9C24D917A4651A46CE9E859387D9C == \8\5\A\9\C\2\4\D\9\1\7\A\4\6\5\1\A\4\6\C\E\9\E\8\5\9\3\8\7\D\9\C ]] 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e71c3e1f-7d00-471e-a81e-10fb48d2c1e5 00:20:54.790 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e71c3e1f7d00471ea81e10fb48d2c1e5 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E71C3E1F7D00471EA81E10FB48D2C1E5 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E71C3E1F7D00471EA81E10FB48D2C1E5 == \E\7\1\C\3\E\1\F\7\D\0\0\4\7\1\E\A\8\1\E\1\0\F\B\4\8\D\2\C\1\E\5 ]] 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:54.791 13:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2a95b049-1728-47ca-8e0d-db19468a26c7 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2a95b049172847ca8e0ddb19468a26c7 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2A95B049172847CA8E0DDB19468A26C7 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2A95B049172847CA8E0DDB19468A26C7 == \2\A\9\5\B\0\4\9\1\7\2\8\4\7\C\A\8\E\0\D\D\B\1\9\4\6\8\A\2\6\C\7 ]] 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 86284 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86284 ']' 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86284 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.791 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86284 00:20:55.048 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:55.048 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:55.049 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86284' 00:20:55.049 killing process with pid 86284 00:20:55.049 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86284 00:20:55.049 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86284 00:20:55.307 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:55.307 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.307 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:55.307 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.307 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
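[Editor's note] Each of the three checks above applies the same recipe: uuid2nguid strips the dashes from the UUID handed to the namespace, nvme id-ns reports the NGUID the controller actually exposes, and the two must match byte for byte. Condensed into one runnable check (device and UUID taken from the first comparison in the trace):

  expected=$(tr -d - <<< "85a9c24d-917a-4651-a46c-e9e859387d9c")
  got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  [[ ${got^^} == "${expected^^}" ]] && echo "nsid 1: NGUID matches its UUID"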
00:20:55.307 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.307 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.307 rmmod nvme_tcp 00:20:55.307 rmmod nvme_fabrics 00:20:55.566 rmmod nvme_keyring 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 86259 ']' 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 86259 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86259 ']' 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86259 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86259 00:20:55.566 killing process with pid 86259 00:20:55.566 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.566 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.566 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86259' 00:20:55.566 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86259 00:20:55.566 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86259 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.825 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.085 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:20:56.085 00:20:56.085 real 0m4.842s 00:20:56.085 user 0m7.461s 00:20:56.085 sys 0m1.409s 00:20:56.085 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.085 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:56.085 ************************************ 00:20:56.085 END TEST nvmf_nsid 00:20:56.085 ************************************ 00:20:56.085 13:14:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:56.085 00:20:56.085 real 7m35.952s 00:20:56.085 user 18m21.861s 00:20:56.085 sys 1m28.840s 00:20:56.085 13:14:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.085 ************************************ 00:20:56.085 END TEST nvmf_target_extra 00:20:56.085 13:14:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:56.085 ************************************ 00:20:56.085 13:14:44 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:56.085 13:14:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:56.085 13:14:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.085 13:14:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:56.085 ************************************ 00:20:56.085 START TEST nvmf_host 00:20:56.085 ************************************ 00:20:56.085 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:56.085 * Looking for test storage... 
00:20:56.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:56.085 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:56.085 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:56.085 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:56.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.408 --rc genhtml_branch_coverage=1 00:20:56.408 --rc genhtml_function_coverage=1 00:20:56.408 --rc genhtml_legend=1 00:20:56.408 --rc geninfo_all_blocks=1 00:20:56.408 --rc geninfo_unexecuted_blocks=1 00:20:56.408 00:20:56.408 ' 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:56.408 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:56.408 --rc genhtml_branch_coverage=1 00:20:56.408 --rc genhtml_function_coverage=1 00:20:56.408 --rc genhtml_legend=1 00:20:56.408 --rc geninfo_all_blocks=1 00:20:56.408 --rc geninfo_unexecuted_blocks=1 00:20:56.408 00:20:56.408 ' 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:56.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.408 --rc genhtml_branch_coverage=1 00:20:56.408 --rc genhtml_function_coverage=1 00:20:56.408 --rc genhtml_legend=1 00:20:56.408 --rc geninfo_all_blocks=1 00:20:56.408 --rc geninfo_unexecuted_blocks=1 00:20:56.408 00:20:56.408 ' 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:56.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.408 --rc genhtml_branch_coverage=1 00:20:56.408 --rc genhtml_function_coverage=1 00:20:56.408 --rc genhtml_legend=1 00:20:56.408 --rc geninfo_all_blocks=1 00:20:56.408 --rc geninfo_unexecuted_blocks=1 00:20:56.408 00:20:56.408 ' 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.408 13:14:44 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.409 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
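[Editor's note] The "[: : integer expression expected" complaint from common.sh line 33 is a real, if harmless, shell bug that this section captures more than once: build_nvmf_app_args runs '[' '' -eq 1 ']', and -eq requires integer operands. A two-line reproduction plus the usual defensive spelling (the default-to-zero fix is a suggestion, not what common.sh does):

  x=''
  [ "$x" -eq 1 ]           # [: : integer expression expected; exit status 2
  [ "${x:-0}" -eq 1 ]      # defaulting the empty value keeps the test numeric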
00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.409 ************************************ 00:20:56.409 START TEST nvmf_multicontroller 00:20:56.409 ************************************ 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:56.409 * Looking for test storage... 00:20:56.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:56.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.409 --rc genhtml_branch_coverage=1 00:20:56.409 --rc genhtml_function_coverage=1 00:20:56.409 --rc genhtml_legend=1 00:20:56.409 --rc geninfo_all_blocks=1 00:20:56.409 --rc geninfo_unexecuted_blocks=1 00:20:56.409 00:20:56.409 ' 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:56.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.409 --rc genhtml_branch_coverage=1 00:20:56.409 --rc genhtml_function_coverage=1 00:20:56.409 --rc genhtml_legend=1 00:20:56.409 --rc geninfo_all_blocks=1 00:20:56.409 --rc geninfo_unexecuted_blocks=1 00:20:56.409 00:20:56.409 ' 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:56.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.409 --rc genhtml_branch_coverage=1 00:20:56.409 --rc genhtml_function_coverage=1 00:20:56.409 --rc genhtml_legend=1 00:20:56.409 --rc geninfo_all_blocks=1 00:20:56.409 --rc geninfo_unexecuted_blocks=1 00:20:56.409 00:20:56.409 ' 00:20:56.409 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:56.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.409 --rc genhtml_branch_coverage=1 00:20:56.409 --rc genhtml_function_coverage=1 00:20:56.409 --rc genhtml_legend=1 00:20:56.409 --rc geninfo_all_blocks=1 00:20:56.410 --rc geninfo_unexecuted_blocks=1 00:20:56.410 00:20:56.410 ' 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:56.410 13:14:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:20:56.410 13:14:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.410 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:56.410 13:14:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:56.410 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:56.687 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:56.688 13:14:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:56.688 Cannot find device "nvmf_init_br" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:56.688 Cannot find device "nvmf_init_br2" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:56.688 Cannot find device "nvmf_tgt_br" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:56.688 Cannot find device "nvmf_tgt_br2" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:56.688 Cannot find device "nvmf_init_br" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:56.688 Cannot find device "nvmf_init_br2" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:56.688 Cannot find device "nvmf_tgt_br" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:56.688 Cannot find device "nvmf_tgt_br2" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:56.688 Cannot find device "nvmf_br" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:56.688 Cannot find device "nvmf_init_if" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:56.688 Cannot find device "nvmf_init_if2" 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:56.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:56.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:56.688 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:56.945 13:14:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:56.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:56.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:20:56.945 00:20:56.945 --- 10.0.0.3 ping statistics --- 00:20:56.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.945 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:56.945 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:56.945 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:20:56.945 00:20:56.945 --- 10.0.0.4 ping statistics --- 00:20:56.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.945 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:56.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:20:56.945 00:20:56.945 --- 10.0.0.1 ping statistics --- 00:20:56.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.945 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:56.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:56.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:20:56.945 00:20:56.945 --- 10.0.0.2 ping statistics --- 00:20:56.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.945 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=86659 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 86659 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86659 ']' 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.945 13:14:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.945 [2024-12-06 13:14:45.532870] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
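For reference, a minimal sketch condensing the bring-up traced above: one network namespace for the target, four veth pairs whose bridge-side peers are enslaved to nvmf_br, initiator addresses 10.0.0.1-2 outside the namespace, target addresses 10.0.0.3-4 inside it, iptables holes for TCP/4420, and nvmf_tgt launched inside the namespace. Interface names, addresses, and flags are taken from the trace; the error output from the idempotent pre-cleanup pass is omitted.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace, then address both sides
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together and open the NVMe/TCP port
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # target lives in the namespace

The four pings traced above verify connectivity across the bridge in both directions before the target is provisioned.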
00:20:56.945 [2024-12-06 13:14:45.533197] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.202 [2024-12-06 13:14:45.692014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:57.202 [2024-12-06 13:14:45.766987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.202 [2024-12-06 13:14:45.767272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.202 [2024-12-06 13:14:45.767298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.202 [2024-12-06 13:14:45.767309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.202 [2024-12-06 13:14:45.767318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.202 [2024-12-06 13:14:45.768560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.202 [2024-12-06 13:14:45.768678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.202 [2024-12-06 13:14:45.768687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.134 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.134 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:58.134 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.134 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.134 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.134 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 [2024-12-06 13:14:46.552303] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 Malloc0 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 [2024-12-06 13:14:46.614724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 [2024-12-06 13:14:46.622600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 Malloc1 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86711 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86711 /var/tmp/bdevperf.sock 00:20:58.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86711 ']' 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
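Condensed, the provisioning just traced: a TCP transport, two subsystems (cnode1, cnode2) each backed by a 64 MB Malloc bdev and listening on ports 4420 and 4421 of 10.0.0.3, then bdevperf parked with -z on its own RPC socket until the test drives it. A sketch assuming the stock scripts/rpc.py that the rpc_cmd wrapper invokes (the loop is a paraphrase of the unrolled calls above):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2; do
        rpc.py bdev_malloc_create 64 512 -b "Malloc$((i - 1))"
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$(printf '%014d' "$i")"
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4421
    done
    # initiator side: -z holds bdevperf idle until perform_tests arrives on its socket
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

With the target's RPC socket inside the namespace still reachable over the shared filesystem, the same rpc.py binary serves both processes; only the -s socket path differs.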
00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.135 13:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.584 NVMe0n1 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.584 1 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.584 2024/12/06 13:14:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:59.584 request: 00:20:59.584 { 00:20:59.584 "method": "bdev_nvme_attach_controller", 00:20:59.584 "params": { 00:20:59.584 "name": "NVMe0", 00:20:59.584 "trtype": "tcp", 00:20:59.584 "traddr": "10.0.0.3", 00:20:59.584 "adrfam": "ipv4", 00:20:59.584 "trsvcid": "4420", 00:20:59.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.584 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:59.584 "hostaddr": "10.0.0.1", 00:20:59.584 "prchk_reftag": false, 00:20:59.584 "prchk_guard": false, 00:20:59.584 "hdgst": false, 00:20:59.584 "ddgst": false, 00:20:59.584 "allow_unrecognized_csi": false 00:20:59.584 } 00:20:59.584 } 00:20:59.584 Got JSON-RPC error response 00:20:59.584 GoRPCClient: error on JSON-RPC call 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.584 2024/12/06 13:14:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:59.584 request: 00:20:59.584 { 00:20:59.584 "method": "bdev_nvme_attach_controller", 00:20:59.584 "params": { 00:20:59.584 "name": "NVMe0", 00:20:59.584 "trtype": "tcp", 00:20:59.584 "traddr": "10.0.0.3", 00:20:59.584 "adrfam": "ipv4", 00:20:59.584 "trsvcid": "4420", 00:20:59.584 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:59.584 "hostaddr": "10.0.0.1", 00:20:59.584 "prchk_reftag": false, 00:20:59.584 "prchk_guard": false, 00:20:59.584 "hdgst": false, 00:20:59.584 "ddgst": false, 00:20:59.584 "allow_unrecognized_csi": false 00:20:59.584 } 00:20:59.584 } 00:20:59.584 Got JSON-RPC error response 00:20:59.584 GoRPCClient: error on JSON-RPC call 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.584 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.584 2024/12/06 13:14:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:59.584 request: 00:20:59.584 { 00:20:59.584 
"method": "bdev_nvme_attach_controller", 00:20:59.584 "params": { 00:20:59.584 "name": "NVMe0", 00:20:59.584 "trtype": "tcp", 00:20:59.584 "traddr": "10.0.0.3", 00:20:59.584 "adrfam": "ipv4", 00:20:59.584 "trsvcid": "4420", 00:20:59.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.584 "hostaddr": "10.0.0.1", 00:20:59.584 "prchk_reftag": false, 00:20:59.584 "prchk_guard": false, 00:20:59.584 "hdgst": false, 00:20:59.584 "ddgst": false, 00:20:59.584 "multipath": "disable", 00:20:59.584 "allow_unrecognized_csi": false 00:20:59.584 } 00:20:59.584 } 00:20:59.584 Got JSON-RPC error response 00:20:59.585 GoRPCClient: error on JSON-RPC call 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.585 2024/12/06 13:14:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:59.585 request: 00:20:59.585 { 00:20:59.585 "method": "bdev_nvme_attach_controller", 00:20:59.585 "params": { 00:20:59.585 "name": "NVMe0", 00:20:59.585 "trtype": "tcp", 00:20:59.585 "traddr": 
"10.0.0.3", 00:20:59.585 "adrfam": "ipv4", 00:20:59.585 "trsvcid": "4420", 00:20:59.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.585 "hostaddr": "10.0.0.1", 00:20:59.585 "prchk_reftag": false, 00:20:59.585 "prchk_guard": false, 00:20:59.585 "hdgst": false, 00:20:59.585 "ddgst": false, 00:20:59.585 "multipath": "failover", 00:20:59.585 "allow_unrecognized_csi": false 00:20:59.585 } 00:20:59.585 } 00:20:59.585 Got JSON-RPC error response 00:20:59.585 GoRPCClient: error on JSON-RPC call 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.585 13:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.585 NVMe0n1 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.585 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.585 13:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:59.585 13:14:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.962 { 00:21:00.962 "results": [ 00:21:00.962 { 00:21:00.962 "job": "NVMe0n1", 00:21:00.962 "core_mask": "0x1", 00:21:00.962 "workload": "write", 00:21:00.962 "status": "finished", 00:21:00.962 "queue_depth": 128, 00:21:00.962 "io_size": 4096, 00:21:00.962 "runtime": 1.009023, 00:21:00.962 "iops": 18975.781523315127, 00:21:00.962 "mibps": 74.12414657544971, 00:21:00.962 "io_failed": 0, 00:21:00.962 "io_timeout": 0, 00:21:00.962 "avg_latency_us": 6735.536791047257, 00:21:00.962 "min_latency_us": 2442.7054545454544, 00:21:00.962 "max_latency_us": 17992.61090909091 00:21:00.962 } 00:21:00.962 ], 00:21:00.962 "core_count": 1 00:21:00.962 } 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.962 nvme1n1 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:21:00.962 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
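The four NOT cases above pin down the attach semantics under test: re-attaching the exact network path already held by controller NVMe0 is rejected with Code=-114 whether the duplicate carries a different hostnqn, a different subsystem NQN, -x disable, or -x failover, while a genuinely new path (the second listener on port 4421) attaches cleanly, and a distinct controller name may attach to the same subsystem. A sketch of the accepted sequence, again via scripts/rpc.py against the bdevperf socket:

    RPC="rpc.py -s /var/tmp/bdevperf.sock"
    # same path under an existing name: rejected, Code=-114 (as traced)
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # port 4421 is a new path: attach it to NVMe0, then drop it again
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # a second controller name against the same target is also accepted
    $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    $RPC bdev_nvme_get_controllers | grep -c NVMe    # 2, as asserted above

The perform_tests call that follows drives the one-second 128-deep 4 KiB write workload whose results JSON appears in the trace, and the nvme1 re-attach with -i 10.0.0.2 sets up the peer-address check that nvmf_subsystem_get_qpairs confirms next.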
00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.963 nvme1n1 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86711 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86711 ']' 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86711 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86711 00:21:00.963 killing process with pid 86711 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86711' 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86711 00:21:00.963 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86711 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:01.222 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:21:01.222 [2024-12-06 13:14:46.741828] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:21:01.222 [2024-12-06 13:14:46.741976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86711 ] 00:21:01.222 [2024-12-06 13:14:46.888064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.222 [2024-12-06 13:14:46.954672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.222 [2024-12-06 13:14:48.101364] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 9f86b14a-e4d5-4f3d-8d71-63c24d025d55 already exists 00:21:01.222 [2024-12-06 13:14:48.101439] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:9f86b14a-e4d5-4f3d-8d71-63c24d025d55 alias for bdev NVMe1n1 00:21:01.222 [2024-12-06 13:14:48.101474] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:01.222 Running I/O for 1 seconds... 
00:21:01.222 18954.00 IOPS, 74.04 MiB/s 00:21:01.222 Latency(us) 00:21:01.222 [2024-12-06T13:14:49.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.222 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:01.222 NVMe0n1 : 1.01 18975.78 74.12 0.00 0.00 6735.54 2442.71 17992.61 00:21:01.222 [2024-12-06T13:14:49.862Z] =================================================================================================================== 00:21:01.222 [2024-12-06T13:14:49.862Z] Total : 18975.78 74.12 0.00 0.00 6735.54 2442.71 17992.61 00:21:01.222 Received shutdown signal, test time was about 1.000000 seconds 00:21:01.222 00:21:01.222 Latency(us) 00:21:01.222 [2024-12-06T13:14:49.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.222 [2024-12-06T13:14:49.862Z] =================================================================================================================== 00:21:01.222 [2024-12-06T13:14:49.862Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.222 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.222 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.481 rmmod nvme_tcp 00:21:01.481 rmmod nvme_fabrics 00:21:01.481 rmmod nvme_keyring 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 86659 ']' 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 86659 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86659 ']' 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86659 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86659 00:21:01.481 killing process with pid 86659 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:01.481 13:14:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86659' 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86659 00:21:01.481 13:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86659 00:21:01.740 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:01.740 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:01.740 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:01.740 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:01.740 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:01.741 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
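Teardown is the bring-up in reverse, keyed off the SPDK_NVMF comment tag so only this run's firewall rules are stripped. A condensed sketch of the sequence traced above; the final namespace removal happens inside _remove_spdk_ns, whose output is redirected away, so the last line is an assumption about that helper's effect:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # drop only the rules this run tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" nomaster
        ip link set "$l" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns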
00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:21:02.000 00:21:02.000 real 0m5.651s 00:21:02.000 user 0m17.110s 00:21:02.000 sys 0m1.203s 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.000 ************************************ 00:21:02.000 END TEST nvmf_multicontroller 00:21:02.000 ************************************ 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.000 ************************************ 00:21:02.000 START TEST nvmf_aer 00:21:02.000 ************************************ 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:02.000 * Looking for test storage... 00:21:02.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:02.000 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:02.259 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:02.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.260 --rc genhtml_branch_coverage=1 00:21:02.260 --rc genhtml_function_coverage=1 00:21:02.260 --rc genhtml_legend=1 00:21:02.260 --rc geninfo_all_blocks=1 00:21:02.260 --rc geninfo_unexecuted_blocks=1 00:21:02.260 00:21:02.260 ' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:02.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.260 --rc genhtml_branch_coverage=1 00:21:02.260 --rc genhtml_function_coverage=1 00:21:02.260 --rc genhtml_legend=1 00:21:02.260 --rc geninfo_all_blocks=1 00:21:02.260 --rc geninfo_unexecuted_blocks=1 00:21:02.260 00:21:02.260 ' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:02.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.260 --rc genhtml_branch_coverage=1 00:21:02.260 --rc genhtml_function_coverage=1 00:21:02.260 --rc genhtml_legend=1 00:21:02.260 --rc geninfo_all_blocks=1 00:21:02.260 --rc geninfo_unexecuted_blocks=1 00:21:02.260 00:21:02.260 ' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:02.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.260 --rc genhtml_branch_coverage=1 00:21:02.260 --rc genhtml_function_coverage=1 00:21:02.260 --rc genhtml_legend=1 00:21:02.260 --rc geninfo_all_blocks=1 00:21:02.260 --rc geninfo_unexecuted_blocks=1 00:21:02.260 00:21:02.260 ' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.260 
13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.260 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
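
nvmftestinit now builds the test network from scratch. The "line 33: [: : integer expression expected" complaint above is a long-standing harness wart (an empty variable fed to an integer -eq test in nvmf/common.sh), not a failure of this run, and the string of "Cannot find device ..." / "# true" pairs that follow are a tolerated teardown of whatever topology a previous test left behind. A condensed sketch of what nvmf_veth_init then creates (a paraphrase of the trace below, not the harness source; addresses per nvmf/common.sh@145-148):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator side, 10.0.0.2/24
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side,    10.0.0.3/24
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target side,    10.0.0.4/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                               # one bridge joins all *_br peers
    for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$p" master nvmf_br
    done

The four pings that close the setup verify both directions across the bridge before any NVMe/TCP traffic is attempted.
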
00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:02.260 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:02.261 Cannot find device "nvmf_init_br" 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:02.261 Cannot find device "nvmf_init_br2" 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:02.261 Cannot find device "nvmf_tgt_br" 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:02.261 Cannot find device "nvmf_tgt_br2" 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:02.261 Cannot find device "nvmf_init_br" 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:02.261 Cannot find device "nvmf_init_br2" 00:21:02.261 13:14:50 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:02.261 Cannot find device "nvmf_tgt_br" 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:02.261 Cannot find device "nvmf_tgt_br2" 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:02.261 Cannot find device "nvmf_br" 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:02.261 Cannot find device "nvmf_init_if" 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:02.261 Cannot find device "nvmf_init_if2" 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:21:02.261 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:02.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.519 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:21:02.519 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:02.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:02.520 13:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:02.520 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:02.520 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:21:02.520 00:21:02.520 --- 10.0.0.3 ping statistics --- 00:21:02.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.520 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:02.520 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:02.520 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:21:02.520 00:21:02.520 --- 10.0.0.4 ping statistics --- 00:21:02.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.520 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:02.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:21:02.520 00:21:02.520 --- 10.0.0.1 ping statistics --- 00:21:02.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.520 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:02.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:21:02.520 00:21:02.520 --- 10.0.0.2 ping statistics --- 00:21:02.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.520 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.520 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.778 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=87030 00:21:02.778 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:02.778 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 87030 00:21:02.778 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 87030 ']' 00:21:02.778 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.778 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.778 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.778 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.778 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.778 [2024-12-06 13:14:51.215274] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
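
nvmfappstart has just launched the target inside the namespace (the "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF" trace above); everything from here on talks to it over /var/tmp/spdk.sock. To reproduce the step by hand it reduces to roughly the following, a sketch assuming this job's workspace paths, with the polling that waitforlisten performs in the real harness:

    sudo ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # poll the default RPC socket until the app answers (what waitforlisten amounts to)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
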
00:21:02.778 [2024-12-06 13:14:51.215353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.778 [2024-12-06 13:14:51.368063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.037 [2024-12-06 13:14:51.439497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.037 [2024-12-06 13:14:51.439561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.037 [2024-12-06 13:14:51.439576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.037 [2024-12-06 13:14:51.439587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.037 [2024-12-06 13:14:51.439596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.037 [2024-12-06 13:14:51.440861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.037 [2024-12-06 13:14:51.440968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.037 [2024-12-06 13:14:51.441012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.037 [2024-12-06 13:14:51.441020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.037 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.037 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:03.037 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.037 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.037 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.037 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.037 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:03.037 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.037 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.037 [2024-12-06 13:14:51.620054] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.038 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.038 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:03.038 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.038 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.038 Malloc0 00:21:03.038 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.038 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:03.038 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.038 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.297 [2024-12-06 13:14:51.686951] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.297 [ 00:21:03.297 { 00:21:03.297 "allow_any_host": true, 00:21:03.297 "hosts": [], 00:21:03.297 "listen_addresses": [], 00:21:03.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:03.297 "subtype": "Discovery" 00:21:03.297 }, 00:21:03.297 { 00:21:03.297 "allow_any_host": true, 00:21:03.297 "hosts": [], 00:21:03.297 "listen_addresses": [ 00:21:03.297 { 00:21:03.297 "adrfam": "IPv4", 00:21:03.297 "traddr": "10.0.0.3", 00:21:03.297 "trsvcid": "4420", 00:21:03.297 "trtype": "TCP" 00:21:03.297 } 00:21:03.297 ], 00:21:03.297 "max_cntlid": 65519, 00:21:03.297 "max_namespaces": 2, 00:21:03.297 "min_cntlid": 1, 00:21:03.297 "model_number": "SPDK bdev Controller", 00:21:03.297 "namespaces": [ 00:21:03.297 { 00:21:03.297 "bdev_name": "Malloc0", 00:21:03.297 "name": "Malloc0", 00:21:03.297 "nguid": "D81B25A29A7F40C78BAD41182D782B4F", 00:21:03.297 "nsid": 1, 00:21:03.297 "uuid": "d81b25a2-9a7f-40c7-8bad-41182d782b4f" 00:21:03.297 } 00:21:03.297 ], 00:21:03.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.297 "serial_number": "SPDK00000000000001", 00:21:03.297 "subtype": "NVMe" 00:21:03.297 } 00:21:03.297 ] 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=87071 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.297 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.556 Malloc1 00:21:03.556 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.556 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:03.556 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.556 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.556 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.556 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:03.556 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.556 13:14:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.556 [ 00:21:03.556 { 00:21:03.556 "allow_any_host": true, 00:21:03.556 "hosts": [], 00:21:03.556 "listen_addresses": [], 00:21:03.556 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:03.556 "subtype": "Discovery" 00:21:03.556 }, 00:21:03.556 { 00:21:03.556 "allow_any_host": true, 00:21:03.556 "hosts": [], 00:21:03.556 "listen_addresses": [ 00:21:03.556 { 00:21:03.556 "adrfam": "IPv4", 00:21:03.556 "traddr": "10.0.0.3", 00:21:03.556 "trsvcid": "4420", 00:21:03.556 "trtype": "TCP" 00:21:03.556 } 00:21:03.556 ], 00:21:03.556 "max_cntlid": 65519, 00:21:03.556 "max_namespaces": 2, 00:21:03.556 "min_cntlid": 1, 00:21:03.556 "model_number": "SPDK bdev Controller", 00:21:03.556 "namespaces": [ 00:21:03.556 { 00:21:03.556 "bdev_name": "Malloc0", 00:21:03.556 "name": "Malloc0", 00:21:03.556 "nguid": "D81B25A29A7F40C78BAD41182D782B4F", 00:21:03.556 "nsid": 1, 00:21:03.556 "uuid": "d81b25a2-9a7f-40c7-8bad-41182d782b4f" 00:21:03.556 }, 00:21:03.556 { 00:21:03.556 "bdev_name": "Malloc1", 00:21:03.556 "name": "Malloc1", 00:21:03.556 "nguid": "99B06161D086403BBB51DAD733AECD5E", 00:21:03.556 "nsid": 2, 00:21:03.556 "uuid": "99b06161-d086-403b-bb51-dad733aecd5e" 00:21:03.556 } 00:21:03.556 ], 00:21:03.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.556 "serial_number": "SPDK00000000000001", 00:21:03.556 "subtype": "NVMe" 00:21:03.556 } 00:21:03.556 ]
00:21:03.556 Asynchronous Event Request test 00:21:03.556 Attaching to 10.0.0.3 00:21:03.556 Attached to 10.0.0.3 00:21:03.556 Registering asynchronous event callbacks... 00:21:03.556 Starting namespace attribute notice tests for all controllers... 00:21:03.556 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:03.556 aer_cb - Changed Namespace 00:21:03.556 Cleaning up...
13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 87071 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.556 rmmod nvme_tcp 00:21:03.556 rmmod nvme_fabrics 00:21:03.556 rmmod nvme_keyring 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer --
nvmf/common.sh@517 -- # '[' -n 87030 ']' 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 87030 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 87030 ']' 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 87030 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.556 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87030 00:21:03.814 killing process with pid 87030 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87030' 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 87030 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 87030 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:03.814 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:21:04.072 00:21:04.072 real 0m2.133s 00:21:04.072 user 0m4.176s 00:21:04.072 sys 0m0.726s 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.072 ************************************ 00:21:04.072 END TEST nvmf_aer 00:21:04.072 ************************************ 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:04.072 13:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.073 13:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.073 ************************************ 00:21:04.073 START TEST nvmf_async_init 00:21:04.073 ************************************ 00:21:04.073 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:04.335 * Looking for test storage... 
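
nvmf_aer passed in about 2.1 seconds of wall time. For reference, the RPC sequence it exercised, lifted from the trace above (rpc_cmd is the harness wrapper around scripts/rpc.py):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # the aer tool attaches and registers callbacks; adding a second
    # namespace is what triggers the "Changed Namespace" AEN seen above
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

nvmf_async_init now re-sources the same common.sh preamble, so the version checks, PATH exports, and veth topology rebuild that follow mirror what ran for nvmf_aer.
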
00:21:04.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:04.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.335 --rc genhtml_branch_coverage=1 00:21:04.335 --rc genhtml_function_coverage=1 00:21:04.335 --rc genhtml_legend=1 00:21:04.335 --rc geninfo_all_blocks=1 00:21:04.335 --rc geninfo_unexecuted_blocks=1 00:21:04.335 00:21:04.335 ' 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:04.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.335 --rc genhtml_branch_coverage=1 00:21:04.335 --rc genhtml_function_coverage=1 00:21:04.335 --rc genhtml_legend=1 00:21:04.335 --rc geninfo_all_blocks=1 00:21:04.335 --rc geninfo_unexecuted_blocks=1 00:21:04.335 00:21:04.335 ' 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:04.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.335 --rc genhtml_branch_coverage=1 00:21:04.335 --rc genhtml_function_coverage=1 00:21:04.335 --rc genhtml_legend=1 00:21:04.335 --rc geninfo_all_blocks=1 00:21:04.335 --rc geninfo_unexecuted_blocks=1 00:21:04.335 00:21:04.335 ' 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:04.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.335 --rc genhtml_branch_coverage=1 00:21:04.335 --rc genhtml_function_coverage=1 00:21:04.335 --rc genhtml_legend=1 00:21:04.335 --rc geninfo_all_blocks=1 00:21:04.335 --rc geninfo_unexecuted_blocks=1 00:21:04.335 00:21:04.335 ' 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.335 13:14:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.335 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.336 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:04.336 13:14:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3bc9f1eec40b4b05a37a0fb56d5e39dd 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
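
The variables captured at the top of this test (null_bdev=null0, null_bdev_size=1024, null_block_size=512, and the dash-stripped nguid) foreshadow its shape: expose a null bdev with a known identifier, attach to it from the host side, and verify the identifiers survive the round trip. A hedged sketch of the bdev creation those values imply (the script's exact flags may differ; bdev_null_create takes name, size, block size):

    rpc_cmd bdev_null_create null0 1024 512
    # nguid=3bc9f1eec40b4b05a37a0fb56d5e39dd is the uuidgen output above with
    # dashes removed, so the host side can match the namespace by NGUID later
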
00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:04.336 Cannot find device "nvmf_init_br" 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:04.336 Cannot find device "nvmf_init_br2" 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:04.336 Cannot find device "nvmf_tgt_br" 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:04.336 Cannot find device "nvmf_tgt_br2" 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:04.336 Cannot find device "nvmf_init_br" 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:04.336 Cannot find device "nvmf_init_br2" 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:21:04.336 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:04.602 Cannot find device "nvmf_tgt_br" 00:21:04.602 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:21:04.602 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:04.602 Cannot find device "nvmf_tgt_br2" 00:21:04.602 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:21:04.602 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:04.602 Cannot find device "nvmf_br" 00:21:04.602 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:21:04.602 13:14:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:04.602 Cannot find device "nvmf_init_if" 00:21:04.602 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:21:04.602 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:04.602 Cannot find device "nvmf_init_if2" 00:21:04.602 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:21:04.602 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:04.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.602 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:21:04.602 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:21:04.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.602 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:21:04.602 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:04.602 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:04.602 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:04.603 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:04.860 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:04.860 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:04.860 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:04.860 13:14:53 
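Taken together, the nvmf_veth_init trace above builds one network namespace, four veth pairs, and a bridge joining their host-side ends. A condensed sketch of the same setup follows (interface names and the 10.0.0.0/24 plan are copied from the trace; this is a re-statement, not the harness script, and needs root):

    #!/usr/bin/env bash
    # Sketch of the topology nvmf_veth_init just built, per the trace above.
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    # Four veth pairs: the *_if ends carry addresses, the *_br ends get bridged.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target-side interfaces move into the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    # Addressing per the trace: initiators .1/.2, target .3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring everything up, then enslave the four peer ends to one bridge.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec "$NS" ip link set nvmf_tgt_if  up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

The "Cannot find device" lines just before this are the teardown pass tolerating a clean slate: each failing ip command is OR'd with true, which is why the trace shows a "# true" after every miss.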
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:04.860 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:04.860 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:04.860 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:04.860 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:04.861 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:04.861 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:21:04.861 00:21:04.861 --- 10.0.0.3 ping statistics --- 00:21:04.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.861 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:04.861 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:04.861 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:21:04.861 00:21:04.861 --- 10.0.0.4 ping statistics --- 00:21:04.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.861 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:04.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:04.861 00:21:04.861 --- 10.0.0.1 ping statistics --- 00:21:04.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.861 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:04.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:04.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:21:04.861 00:21:04.861 --- 10.0.0.2 ping statistics --- 00:21:04.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.861 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=87294 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 87294 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 87294 ']' 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.861 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:04.861 [2024-12-06 13:14:53.430213] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:21:04.861 [2024-12-06 13:14:53.430323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.119 [2024-12-06 13:14:53.578920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.119 [2024-12-06 13:14:53.642475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
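With the links bridged, the harness opens the NVMe/TCP port on both initiator-side interfaces and pings all four addresses in both directions before starting the target. A sketch of those checks, with the rules exactly as logged:

    # Admit NVMe/TCP (port 4420) on both initiator-side veths. The SPDK_NVMF
    # comment tag is what the later iptr cleanup greps away.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    # Reachability both ways across the bridge before any NVMe traffic.
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

Tagging every rule with an SPDK_NVMF comment is what makes the teardown shown further down safe: iptables-save | grep -v SPDK_NVMF | iptables-restore removes exactly these rules and nothing else on the box.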
00:21:05.119 [2024-12-06 13:14:53.642536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.119 [2024-12-06 13:14:53.642548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.119 [2024-12-06 13:14:53.642556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.119 [2024-12-06 13:14:53.642563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.119 [2024-12-06 13:14:53.642965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.377 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.377 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:05.377 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:05.377 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.377 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.377 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.377 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:05.377 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.378 [2024-12-06 13:14:53.825896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.378 null0 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3bc9f1eec40b4b05a37a0fb56d5e39dd 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.378 [2024-12-06 13:14:53.882075] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.378 13:14:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.636 nvme0n1 00:21:05.636 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.636 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:05.636 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.636 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.636 [ 00:21:05.636 { 00:21:05.636 "aliases": [ 00:21:05.636 "3bc9f1ee-c40b-4b05-a37a-0fb56d5e39dd" 00:21:05.636 ], 00:21:05.636 "assigned_rate_limits": { 00:21:05.636 "r_mbytes_per_sec": 0, 00:21:05.636 "rw_ios_per_sec": 0, 00:21:05.636 "rw_mbytes_per_sec": 0, 00:21:05.636 "w_mbytes_per_sec": 0 00:21:05.636 }, 00:21:05.636 "block_size": 512, 00:21:05.636 "claimed": false, 00:21:05.636 "driver_specific": { 00:21:05.636 "mp_policy": "active_passive", 00:21:05.636 "nvme": [ 00:21:05.636 { 00:21:05.636 "ctrlr_data": { 00:21:05.636 "ana_reporting": false, 00:21:05.636 "cntlid": 1, 00:21:05.636 "firmware_revision": "25.01", 00:21:05.636 "model_number": "SPDK bdev Controller", 00:21:05.636 "multi_ctrlr": true, 00:21:05.636 "oacs": { 00:21:05.636 "firmware": 0, 00:21:05.636 "format": 0, 00:21:05.636 "ns_manage": 0, 00:21:05.636 "security": 0 00:21:05.636 }, 00:21:05.636 "serial_number": "00000000000000000000", 00:21:05.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.636 "vendor_id": "0x8086" 00:21:05.636 }, 00:21:05.636 "ns_data": { 00:21:05.636 "can_share": true, 00:21:05.636 "id": 1 00:21:05.636 }, 00:21:05.636 "trid": { 00:21:05.636 "adrfam": "IPv4", 00:21:05.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.636 "traddr": "10.0.0.3", 00:21:05.636 "trsvcid": "4420", 00:21:05.636 "trtype": "TCP" 00:21:05.636 }, 00:21:05.636 "vs": { 00:21:05.636 "nvme_version": "1.3" 00:21:05.636 } 00:21:05.636 } 00:21:05.636 ] 00:21:05.636 }, 00:21:05.636 "memory_domains": [ 00:21:05.636 { 00:21:05.636 "dma_device_id": "system", 00:21:05.636 "dma_device_type": 1 00:21:05.636 } 00:21:05.636 ], 00:21:05.636 "name": "nvme0n1", 00:21:05.636 "num_blocks": 2097152, 00:21:05.636 "numa_id": -1, 00:21:05.636 "product_name": "NVMe disk", 00:21:05.636 "supported_io_types": { 00:21:05.636 "abort": true, 
00:21:05.636 "compare": true, 00:21:05.636 "compare_and_write": true, 00:21:05.636 "copy": true, 00:21:05.636 "flush": true, 00:21:05.636 "get_zone_info": false, 00:21:05.636 "nvme_admin": true, 00:21:05.636 "nvme_io": true, 00:21:05.636 "nvme_io_md": false, 00:21:05.636 "nvme_iov_md": false, 00:21:05.636 "read": true, 00:21:05.636 "reset": true, 00:21:05.636 "seek_data": false, 00:21:05.636 "seek_hole": false, 00:21:05.636 "unmap": false, 00:21:05.636 "write": true, 00:21:05.636 "write_zeroes": true, 00:21:05.636 "zcopy": false, 00:21:05.636 "zone_append": false, 00:21:05.636 "zone_management": false 00:21:05.636 }, 00:21:05.636 "uuid": "3bc9f1ee-c40b-4b05-a37a-0fb56d5e39dd", 00:21:05.636 "zoned": false 00:21:05.636 } 00:21:05.636 ] 00:21:05.636 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.636 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:05.636 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.636 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.636 [2024-12-06 13:14:54.146282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:05.636 [2024-12-06 13:14:54.146420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f70360 (9): Bad file descriptor 00:21:05.894 [2024-12-06 13:14:54.288715] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:05.894 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.894 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:05.894 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.894 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.894 [ 00:21:05.894 { 00:21:05.894 "aliases": [ 00:21:05.894 "3bc9f1ee-c40b-4b05-a37a-0fb56d5e39dd" 00:21:05.894 ], 00:21:05.894 "assigned_rate_limits": { 00:21:05.894 "r_mbytes_per_sec": 0, 00:21:05.894 "rw_ios_per_sec": 0, 00:21:05.894 "rw_mbytes_per_sec": 0, 00:21:05.894 "w_mbytes_per_sec": 0 00:21:05.894 }, 00:21:05.894 "block_size": 512, 00:21:05.894 "claimed": false, 00:21:05.894 "driver_specific": { 00:21:05.894 "mp_policy": "active_passive", 00:21:05.894 "nvme": [ 00:21:05.894 { 00:21:05.894 "ctrlr_data": { 00:21:05.894 "ana_reporting": false, 00:21:05.894 "cntlid": 2, 00:21:05.894 "firmware_revision": "25.01", 00:21:05.894 "model_number": "SPDK bdev Controller", 00:21:05.894 "multi_ctrlr": true, 00:21:05.894 "oacs": { 00:21:05.894 "firmware": 0, 00:21:05.894 "format": 0, 00:21:05.894 "ns_manage": 0, 00:21:05.894 "security": 0 00:21:05.894 }, 00:21:05.894 "serial_number": "00000000000000000000", 00:21:05.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.894 "vendor_id": "0x8086" 00:21:05.894 }, 00:21:05.894 "ns_data": { 00:21:05.894 "can_share": true, 00:21:05.894 "id": 1 00:21:05.894 }, 00:21:05.894 "trid": { 00:21:05.894 "adrfam": "IPv4", 00:21:05.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.894 "traddr": "10.0.0.3", 00:21:05.894 "trsvcid": "4420", 00:21:05.894 "trtype": "TCP" 00:21:05.894 }, 00:21:05.894 "vs": { 00:21:05.894 "nvme_version": "1.3" 00:21:05.894 } 00:21:05.894 } 00:21:05.894 ] 
00:21:05.894 }, 00:21:05.894 "memory_domains": [ 00:21:05.894 { 00:21:05.894 "dma_device_id": "system", 00:21:05.894 "dma_device_type": 1 00:21:05.894 } 00:21:05.894 ], 00:21:05.894 "name": "nvme0n1", 00:21:05.894 "num_blocks": 2097152, 00:21:05.894 "numa_id": -1, 00:21:05.894 "product_name": "NVMe disk", 00:21:05.894 "supported_io_types": { 00:21:05.894 "abort": true, 00:21:05.894 "compare": true, 00:21:05.894 "compare_and_write": true, 00:21:05.894 "copy": true, 00:21:05.894 "flush": true, 00:21:05.894 "get_zone_info": false, 00:21:05.894 "nvme_admin": true, 00:21:05.894 "nvme_io": true, 00:21:05.894 "nvme_io_md": false, 00:21:05.894 "nvme_iov_md": false, 00:21:05.894 "read": true, 00:21:05.894 "reset": true, 00:21:05.894 "seek_data": false, 00:21:05.894 "seek_hole": false, 00:21:05.894 "unmap": false, 00:21:05.894 "write": true, 00:21:05.894 "write_zeroes": true, 00:21:05.894 "zcopy": false, 00:21:05.894 "zone_append": false, 00:21:05.894 "zone_management": false 00:21:05.895 }, 00:21:05.895 "uuid": "3bc9f1ee-c40b-4b05-a37a-0fb56d5e39dd", 00:21:05.895 "zoned": false 00:21:05.895 } 00:21:05.895 ] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Lfn8eNTTaW 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Lfn8eNTTaW 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Lfn8eNTTaW 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.895 [2024-12-06 13:14:54.362469] 
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.895 [2024-12-06 13:14:54.362677] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.895 [2024-12-06 13:14:54.378470] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.895 nvme0n1 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.895 [ 00:21:05.895 { 00:21:05.895 "aliases": [ 00:21:05.895 "3bc9f1ee-c40b-4b05-a37a-0fb56d5e39dd" 00:21:05.895 ], 00:21:05.895 "assigned_rate_limits": { 00:21:05.895 "r_mbytes_per_sec": 0, 00:21:05.895 "rw_ios_per_sec": 0, 00:21:05.895 "rw_mbytes_per_sec": 0, 00:21:05.895 "w_mbytes_per_sec": 0 00:21:05.895 }, 00:21:05.895 "block_size": 512, 00:21:05.895 "claimed": false, 00:21:05.895 "driver_specific": { 00:21:05.895 "mp_policy": "active_passive", 00:21:05.895 "nvme": [ 00:21:05.895 { 00:21:05.895 "ctrlr_data": { 00:21:05.895 "ana_reporting": false, 00:21:05.895 "cntlid": 3, 00:21:05.895 "firmware_revision": "25.01", 00:21:05.895 "model_number": "SPDK bdev Controller", 00:21:05.895 "multi_ctrlr": true, 00:21:05.895 "oacs": { 00:21:05.895 "firmware": 0, 00:21:05.895 "format": 0, 00:21:05.895 "ns_manage": 0, 00:21:05.895 "security": 0 00:21:05.895 }, 00:21:05.895 "serial_number": "00000000000000000000", 00:21:05.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.895 "vendor_id": "0x8086" 00:21:05.895 }, 00:21:05.895 "ns_data": { 00:21:05.895 "can_share": true, 00:21:05.895 "id": 1 00:21:05.895 }, 00:21:05.895 "trid": { 00:21:05.895 "adrfam": "IPv4", 00:21:05.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.895 "traddr": "10.0.0.3", 00:21:05.895 "trsvcid": "4421", 00:21:05.895 "trtype": "TCP" 00:21:05.895 }, 00:21:05.895 "vs": { 00:21:05.895 "nvme_version": "1.3" 00:21:05.895 } 00:21:05.895 } 00:21:05.895 ] 00:21:05.895 }, 00:21:05.895 "memory_domains": [ 00:21:05.895 { 00:21:05.895 "dma_device_id": "system", 00:21:05.895 "dma_device_type": 1 00:21:05.895 } 00:21:05.895 ], 00:21:05.895 "name": "nvme0n1", 00:21:05.895 "num_blocks": 
2097152, 00:21:05.895 "numa_id": -1, 00:21:05.895 "product_name": "NVMe disk", 00:21:05.895 "supported_io_types": { 00:21:05.895 "abort": true, 00:21:05.895 "compare": true, 00:21:05.895 "compare_and_write": true, 00:21:05.895 "copy": true, 00:21:05.895 "flush": true, 00:21:05.895 "get_zone_info": false, 00:21:05.895 "nvme_admin": true, 00:21:05.895 "nvme_io": true, 00:21:05.895 "nvme_io_md": false, 00:21:05.895 "nvme_iov_md": false, 00:21:05.895 "read": true, 00:21:05.895 "reset": true, 00:21:05.895 "seek_data": false, 00:21:05.895 "seek_hole": false, 00:21:05.895 "unmap": false, 00:21:05.895 "write": true, 00:21:05.895 "write_zeroes": true, 00:21:05.895 "zcopy": false, 00:21:05.895 "zone_append": false, 00:21:05.895 "zone_management": false 00:21:05.895 }, 00:21:05.895 "uuid": "3bc9f1ee-c40b-4b05-a37a-0fb56d5e39dd", 00:21:05.895 "zoned": false 00:21:05.895 } 00:21:05.895 ] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Lfn8eNTTaW 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.895 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:06.152 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:06.152 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:06.153 rmmod nvme_tcp 00:21:06.153 rmmod nvme_fabrics 00:21:06.153 rmmod nvme_keyring 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 87294 ']' 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 87294 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 87294 ']' 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 87294 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87294 00:21:06.153 13:14:54 
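Stripped of the rpc_cmd plumbing, the whole async_init scenario above is the sequence below: a 1 GiB null bdev exported over plain TCP on port 4420, attached, inspected, reset, and detached from the host side, then re-exported on 4421 behind a file-backed TLS PSK. Method names and arguments are verbatim from the trace; driving them through scripts/rpc.py (SPDK's stock RPC client) and the $SPDK/rpc shorthands are assumptions of this sketch:

    SPDK=/home/vagrant/spdk_repo/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }   # hypothetical wrapper for brevity
    # Target runs inside the namespace; its UNIX socket /var/tmp/spdk.sock is
    # a filesystem object, so it stays reachable from the default netns.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    # Target side: transport, backing bdev, subsystem, namespace, listener.
    rpc nvmf_create_transport -t tcp -o
    rpc bdev_null_create null0 1024 512      # 1024 MiB of 512 B blocks = 2097152 blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 3bc9f1eec40b4b05a37a0fb56d5e39dd
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420
    # Host side: attach, inspect, reset, detach. The JSON dumps above are the
    # bdev_get_bdevs output; note cntlid climbing 1 -> 2 -> 3 per reconnect.
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode0
    rpc bdev_get_bdevs -b nvme0n1
    rpc bdev_nvme_reset_controller nvme0
    rpc bdev_nvme_detach_controller nvme0
    # TLS pass: file-backed PSK, secure listener on 4421, per-host PSK binding.
    key=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
    chmod 0600 "$key"
    rpc keyring_file_add_key key0 "$key"
    rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4421 --secure-channel
    rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 \
        -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    rm -f "$key"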
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.153 killing process with pid 87294 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87294' 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 87294 00:21:06.153 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 87294 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:06.411 13:14:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:06.411 13:14:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:06.411 13:14:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:06.411 13:14:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:06.411 13:14:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.411 13:14:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.411 13:14:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:21:06.669 00:21:06.669 real 0m2.393s 00:21:06.669 user 0m1.750s 00:21:06.669 sys 0m0.726s 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.669 ************************************ 00:21:06.669 END TEST nvmf_async_init 00:21:06.669 ************************************ 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.669 ************************************ 00:21:06.669 START TEST dma 00:21:06.669 ************************************ 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:06.669 * Looking for test storage... 00:21:06.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.669 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:06.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.927 --rc genhtml_branch_coverage=1 00:21:06.927 --rc genhtml_function_coverage=1 00:21:06.927 --rc genhtml_legend=1 00:21:06.927 --rc geninfo_all_blocks=1 00:21:06.927 --rc geninfo_unexecuted_blocks=1 00:21:06.927 00:21:06.927 ' 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:06.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.927 --rc genhtml_branch_coverage=1 00:21:06.927 --rc genhtml_function_coverage=1 00:21:06.927 --rc genhtml_legend=1 00:21:06.927 --rc geninfo_all_blocks=1 00:21:06.927 --rc geninfo_unexecuted_blocks=1 00:21:06.927 00:21:06.927 ' 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:06.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.927 --rc genhtml_branch_coverage=1 00:21:06.927 --rc genhtml_function_coverage=1 00:21:06.927 --rc genhtml_legend=1 00:21:06.927 --rc geninfo_all_blocks=1 00:21:06.927 --rc geninfo_unexecuted_blocks=1 00:21:06.927 00:21:06.927 ' 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:06.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.927 --rc genhtml_branch_coverage=1 00:21:06.927 --rc genhtml_function_coverage=1 00:21:06.927 --rc genhtml_legend=1 00:21:06.927 --rc geninfo_all_blocks=1 00:21:06.927 --rc geninfo_unexecuted_blocks=1 00:21:06.927 00:21:06.927 ' 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.927 13:14:55 
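The cmp_versions walk traced just above (for lt 1.15 2) is a field-wise numeric compare that splits versions on '.', '-' and ':'; here 1 < 2 decides it on the first field, so the harness keeps the pre-2.0 lcov option set. A minimal re-statement of that logic (the function name version_lt is mine, not the harness's):

    # Field-wise version compare, as traced: returns 0 (true) when $1 < $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields count as 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov is pre-2.0"   # decided by 1 < 2 in field 0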
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.927 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:06.927 00:21:06.927 real 0m0.210s 00:21:06.927 user 0m0.128s 00:21:06.927 sys 0m0.093s 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:06.927 ************************************ 00:21:06.927 END TEST dma 00:21:06.927 ************************************ 00:21:06.927 13:14:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.928 ************************************ 00:21:06.928 START TEST nvmf_identify 00:21:06.928 ************************************ 00:21:06.928 13:14:55 
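Two details worth noting from the dma pass above before the identify test proceeds: host/dma.sh exits 0 as soon as it sees a non-rdma transport, and every sourcing of nvmf/common.sh logs "line 33: [: : integer expression expected" because an unset flag reaches a numeric test as the empty string. The trace shows '[' '' -eq 1 ']' but not the variable's name, so $FLAG below is a stand-in; a default expansion would silence the error without changing behavior:

    # host/dma.sh, as traced: the test is RDMA-only, so tcp runs skip it.
    if [ "$TEST_TRANSPORT" != rdma ]; then   # variable name assumed, value "tcp" per the trace
        exit 0
    fi

    # nvmf/common.sh line 33, as traced -- errors when the flag is unset:
    if [ "$FLAG" -eq 1 ]; then :; fi         # [: : integer expression expected
    # Robust form: default the empty value before the numeric test.
    if [ "${FLAG:-0}" -eq 1 ]; then :; fi    # quiet, identical result for 0/1

The error is harmless here (test returns false and the script continues), which is why both the dma and identify passes carry the same message without failing.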
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:06.928 * Looking for test storage... 00:21:06.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.928 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.185 --rc genhtml_branch_coverage=1 00:21:07.185 --rc genhtml_function_coverage=1 00:21:07.185 --rc genhtml_legend=1 00:21:07.185 --rc geninfo_all_blocks=1 00:21:07.185 --rc geninfo_unexecuted_blocks=1 00:21:07.185 00:21:07.185 ' 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.185 --rc genhtml_branch_coverage=1 00:21:07.185 --rc genhtml_function_coverage=1 00:21:07.185 --rc genhtml_legend=1 00:21:07.185 --rc geninfo_all_blocks=1 00:21:07.185 --rc geninfo_unexecuted_blocks=1 00:21:07.185 00:21:07.185 ' 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.185 --rc genhtml_branch_coverage=1 00:21:07.185 --rc genhtml_function_coverage=1 00:21:07.185 --rc genhtml_legend=1 00:21:07.185 --rc geninfo_all_blocks=1 00:21:07.185 --rc geninfo_unexecuted_blocks=1 00:21:07.185 00:21:07.185 ' 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:07.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.185 --rc genhtml_branch_coverage=1 00:21:07.185 --rc genhtml_function_coverage=1 00:21:07.185 --rc genhtml_legend=1 00:21:07.185 --rc geninfo_all_blocks=1 00:21:07.185 --rc geninfo_unexecuted_blocks=1 00:21:07.185 00:21:07.185 ' 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.185 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.186 
13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:07.186 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.186 13:14:55 
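[editor's note] The "[: : integer expression expected" complaint above is a plain bash issue: common.sh line 33 tests a variable numerically while it is empty, so '[' '' -eq 1 ']' cannot parse an integer and the branch falls through. Guarding with a default expansion keeps the test quiet; the flag and option names below are hypothetical, for illustration only:

    # [ "$flag" -eq 1 ] warns when $flag is empty; defaulting it first
    # avoids the parse error. SOME_TEST_FLAG / --some-option are made up.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-option)
    fi
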
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:07.186 Cannot find device "nvmf_init_br" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:07.186 Cannot find device "nvmf_init_br2" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:07.186 Cannot find device "nvmf_tgt_br" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:21:07.186 Cannot find device "nvmf_tgt_br2" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:07.186 Cannot find device "nvmf_init_br" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:07.186 Cannot find device "nvmf_init_br2" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:07.186 Cannot find device "nvmf_tgt_br" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:07.186 Cannot find device "nvmf_tgt_br2" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:07.186 Cannot find device "nvmf_br" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:07.186 Cannot find device "nvmf_init_if" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:07.186 Cannot find device "nvmf_init_if2" 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:07.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:07.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:07.186 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:07.444 
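[editor's note] The "Cannot find device" messages above are the teardown half running first on a clean host; each delete tolerates failure (the true entries in the trace) so setup stays idempotent. Condensed, the namespace plumbing just performed looks like the sketch below, showing one veth pair per side where the run above creates two:

    # One initiator-side and one target-side veth pair; the *_br peers
    # stay in the root namespace so they can be bridged later.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
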
13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:07.444 13:14:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:07.444 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:07.444 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:07.444 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:07.444 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:07.444 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:07.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
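[editor's note] The ipts helper seen above is a thin wrapper that tags every rule with an SPDK_NVMF comment so teardown can later find and remove exactly what it added. A sketch of the wrapper, with an illustrative one-pass cleanup:

    # Tag each rule with its own text so cleanup can grep for it.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # Illustrative teardown: drop every tagged rule at once.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
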
00:21:07.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:21:07.444 00:21:07.444 --- 10.0.0.3 ping statistics --- 00:21:07.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.444 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:21:07.444 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:07.444 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:07.444 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:21:07.444 00:21:07.444 --- 10.0.0.4 ping statistics --- 00:21:07.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.444 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:07.444 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:07.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:07.445 00:21:07.445 --- 10.0.0.1 ping statistics --- 00:21:07.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.445 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:07.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:21:07.445 00:21:07.445 --- 10.0.0.2 ping statistics --- 00:21:07.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.445 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87612 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87612 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 87612 ']' 00:21:07.445 
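[editor's note] With connectivity verified in both directions, the target binary is launched inside the namespace simply by prefixing the command array, which is what the NVMF_APP assembly at common.sh@227 above amounts to. A sketch with the path abbreviated and the pid capture illustrative (the harness gets the pid via waitforlisten's caller):

    # Prefix the app with "ip netns exec" so nvmf_tgt only sees the
    # namespace's interfaces (10.0.0.3 / 10.0.0.4).
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF)
    "${NVMF_APP[@]}" &
    nvmfpid=$!
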
13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.445 13:14:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:07.703 [2024-12-06 13:14:56.116344] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:21:07.703 [2024-12-06 13:14:56.116471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.703 [2024-12-06 13:14:56.267623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.703 [2024-12-06 13:14:56.335059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.703 [2024-12-06 13:14:56.335127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.703 [2024-12-06 13:14:56.335140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.703 [2024-12-06 13:14:56.335149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.703 [2024-12-06 13:14:56.335156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
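[editor's note] waitforlisten, which gates everything that follows, is essentially a poll loop: keep checking that the pid is alive and that the RPC socket answers. A minimal equivalent, with retry count, interval, and script path as illustrative assumptions:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # app exited early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                >/dev/null 2>&1 && return 0               # RPC is serving
            sleep 0.1
        done
        return 1
    }
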
00:21:07.703 [2024-12-06 13:14:56.336446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.704 [2024-12-06 13:14:56.336530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.704 [2024-12-06 13:14:56.336596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.704 [2024-12-06 13:14:56.336596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.639 [2024-12-06 13:14:57.177152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.639 Malloc0 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.639 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.898 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.898 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:08.898 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.898 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.899 [2024-12-06 13:14:57.286544] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:08.899 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.899 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:08.899 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.899 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.899 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.899 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:08.899 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.899 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.899 [ 00:21:08.899 { 00:21:08.899 "allow_any_host": true, 00:21:08.899 "hosts": [], 00:21:08.899 "listen_addresses": [ 00:21:08.899 { 00:21:08.899 "adrfam": "IPv4", 00:21:08.899 "traddr": "10.0.0.3", 00:21:08.899 "trsvcid": "4420", 00:21:08.899 "trtype": "TCP" 00:21:08.899 } 00:21:08.899 ], 00:21:08.899 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:08.899 "subtype": "Discovery" 00:21:08.899 }, 00:21:08.899 { 00:21:08.899 "allow_any_host": true, 00:21:08.899 "hosts": [], 00:21:08.899 "listen_addresses": [ 00:21:08.899 { 00:21:08.899 "adrfam": "IPv4", 00:21:08.899 "traddr": "10.0.0.3", 00:21:08.899 "trsvcid": "4420", 00:21:08.899 "trtype": "TCP" 00:21:08.899 } 00:21:08.899 ], 00:21:08.899 "max_cntlid": 65519, 00:21:08.899 "max_namespaces": 32, 00:21:08.899 "min_cntlid": 1, 00:21:08.899 "model_number": "SPDK bdev Controller", 00:21:08.899 "namespaces": [ 00:21:08.899 { 00:21:08.899 "bdev_name": "Malloc0", 00:21:08.899 "eui64": "ABCDEF0123456789", 00:21:08.899 "name": "Malloc0", 00:21:08.899 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:08.899 "nsid": 1, 00:21:08.899 "uuid": "1d91d439-dd49-4a18-8bdb-326d8a1929a7" 00:21:08.899 } 00:21:08.899 ], 00:21:08.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.899 "serial_number": "SPDK00000000000001", 00:21:08.899 "subtype": "NVMe" 00:21:08.899 } 00:21:08.899 ] 00:21:08.899 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.899 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:08.899 [2024-12-06 13:14:57.341130] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
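[editor's note] Stripped of the xtrace noise, identify.sh drove six RPCs to build the target and then pointed spdk_nvme_identify at the discovery NQN. In plain form (rpc_cmd is a thin wrapper around scripts/rpc.py against the target's RPC socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 \
        trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
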
00:21:08.899 [2024-12-06 13:14:57.341194] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87665 ] 00:21:08.899 [2024-12-06 13:14:57.509474] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:08.899 [2024-12-06 13:14:57.509563] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:08.899 [2024-12-06 13:14:57.509572] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:08.899 [2024-12-06 13:14:57.509589] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:08.899 [2024-12-06 13:14:57.509602] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:08.899 [2024-12-06 13:14:57.510064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:08.899 [2024-12-06 13:14:57.510131] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x103dd90 0 00:21:08.899 [2024-12-06 13:14:57.514488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:08.899 [2024-12-06 13:14:57.514520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:08.899 [2024-12-06 13:14:57.514528] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:08.899 [2024-12-06 13:14:57.514533] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:08.899 [2024-12-06 13:14:57.514576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.514586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.514592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103dd90) 00:21:08.899 [2024-12-06 13:14:57.514612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:08.899 [2024-12-06 13:14:57.514652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e600, cid 0, qid 0 00:21:08.899 [2024-12-06 13:14:57.522482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.899 [2024-12-06 13:14:57.522513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.899 [2024-12-06 13:14:57.522520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.522527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e600) on tqpair=0x103dd90 00:21:08.899 [2024-12-06 13:14:57.522546] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:08.899 [2024-12-06 13:14:57.522558] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:08.899 [2024-12-06 13:14:57.522567] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:08.899 [2024-12-06 13:14:57.522592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.522600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
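[editor's note] The debug cascade above is the NVMe/TCP connect handshake: open the socket, exchange ICReq/ICResp PDUs, then issue a Fabrics CONNECT on the admin queue followed by property reads (VS, CAP). The kernel initiator reaches the same listener with one command; a hedged nvme-cli equivalent, assuming the host variables set earlier:

    # Kernel-side counterpart of what spdk_nvme_identify is doing here:
    # connect to the discovery subsystem and dump its log page.
    nvme discover -t tcp -a 10.0.0.3 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
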
00:21:08.899 [2024-12-06 13:14:57.522606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103dd90) 00:21:08.899 [2024-12-06 13:14:57.522620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.899 [2024-12-06 13:14:57.522659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e600, cid 0, qid 0 00:21:08.899 [2024-12-06 13:14:57.522735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.899 [2024-12-06 13:14:57.522744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.899 [2024-12-06 13:14:57.522750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.522755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e600) on tqpair=0x103dd90 00:21:08.899 [2024-12-06 13:14:57.522763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:08.899 [2024-12-06 13:14:57.522774] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:08.899 [2024-12-06 13:14:57.522785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.522791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.522796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103dd90) 00:21:08.899 [2024-12-06 13:14:57.522806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.899 [2024-12-06 13:14:57.522831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e600, cid 0, qid 0 00:21:08.899 [2024-12-06 13:14:57.522884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.899 [2024-12-06 13:14:57.522905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.899 [2024-12-06 13:14:57.522912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.522917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e600) on tqpair=0x103dd90 00:21:08.899 [2024-12-06 13:14:57.522925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:08.899 [2024-12-06 13:14:57.522937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:08.899 [2024-12-06 13:14:57.522948] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.522953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.522958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103dd90) 00:21:08.899 [2024-12-06 13:14:57.522968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.899 [2024-12-06 13:14:57.522994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e600, cid 0, qid 0 00:21:08.899 [2024-12-06 13:14:57.523046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.899 [2024-12-06 13:14:57.523055] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.899 [2024-12-06 13:14:57.523060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.523065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e600) on tqpair=0x103dd90 00:21:08.899 [2024-12-06 13:14:57.523073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:08.899 [2024-12-06 13:14:57.523095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.523102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.523107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103dd90) 00:21:08.899 [2024-12-06 13:14:57.523116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.899 [2024-12-06 13:14:57.523138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e600, cid 0, qid 0 00:21:08.899 [2024-12-06 13:14:57.523190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.899 [2024-12-06 13:14:57.523199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.899 [2024-12-06 13:14:57.523204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.899 [2024-12-06 13:14:57.523211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e600) on tqpair=0x103dd90 00:21:08.899 [2024-12-06 13:14:57.523221] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:08.899 [2024-12-06 13:14:57.523232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:08.899 [2024-12-06 13:14:57.523244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:08.900 [2024-12-06 13:14:57.523359] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:08.900 [2024-12-06 13:14:57.523373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:08.900 [2024-12-06 13:14:57.523387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103dd90) 00:21:08.900 [2024-12-06 13:14:57.523409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.900 [2024-12-06 13:14:57.523436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e600, cid 0, qid 0 00:21:08.900 [2024-12-06 13:14:57.523509] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.900 [2024-12-06 13:14:57.523520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.900 [2024-12-06 13:14:57.523525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:21:08.900 [2024-12-06 13:14:57.523531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e600) on tqpair=0x103dd90 00:21:08.900 [2024-12-06 13:14:57.523538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:08.900 [2024-12-06 13:14:57.523552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103dd90) 00:21:08.900 [2024-12-06 13:14:57.523574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.900 [2024-12-06 13:14:57.523599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e600, cid 0, qid 0 00:21:08.900 [2024-12-06 13:14:57.523650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.900 [2024-12-06 13:14:57.523660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.900 [2024-12-06 13:14:57.523665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e600) on tqpair=0x103dd90 00:21:08.900 [2024-12-06 13:14:57.523677] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:08.900 [2024-12-06 13:14:57.523684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:08.900 [2024-12-06 13:14:57.523699] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:08.900 [2024-12-06 13:14:57.523713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:08.900 [2024-12-06 13:14:57.523727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103dd90) 00:21:08.900 [2024-12-06 13:14:57.523743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.900 [2024-12-06 13:14:57.523768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e600, cid 0, qid 0 00:21:08.900 [2024-12-06 13:14:57.523868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.900 [2024-12-06 13:14:57.523878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.900 [2024-12-06 13:14:57.523883] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523888] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x103dd90): datao=0, datal=4096, cccid=0 00:21:08.900 [2024-12-06 13:14:57.523895] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x107e600) on tqpair(0x103dd90): expected_datao=0, payload_size=4096 00:21:08.900 [2024-12-06 13:14:57.523901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523913] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523918] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.900 [2024-12-06 13:14:57.523937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.900 [2024-12-06 13:14:57.523942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.523948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e600) on tqpair=0x103dd90 00:21:08.900 [2024-12-06 13:14:57.523960] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:08.900 [2024-12-06 13:14:57.523968] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:08.900 [2024-12-06 13:14:57.523975] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:08.900 [2024-12-06 13:14:57.523983] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:08.900 [2024-12-06 13:14:57.523990] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:08.900 [2024-12-06 13:14:57.523997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:08.900 [2024-12-06 13:14:57.524008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:08.900 [2024-12-06 13:14:57.524018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103dd90) 00:21:08.900 [2024-12-06 13:14:57.524040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.900 [2024-12-06 13:14:57.524066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e600, cid 0, qid 0 00:21:08.900 [2024-12-06 13:14:57.524132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.900 [2024-12-06 13:14:57.524152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.900 [2024-12-06 13:14:57.524158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e600) on tqpair=0x103dd90 00:21:08.900 [2024-12-06 13:14:57.524180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x103dd90) 00:21:08.900 [2024-12-06 13:14:57.524201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.900 
[2024-12-06 13:14:57.524210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x103dd90) 00:21:08.900 [2024-12-06 13:14:57.524229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.900 [2024-12-06 13:14:57.524237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x103dd90) 00:21:08.900 [2024-12-06 13:14:57.524255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.900 [2024-12-06 13:14:57.524264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:08.900 [2024-12-06 13:14:57.524282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.900 [2024-12-06 13:14:57.524289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:08.900 [2024-12-06 13:14:57.524300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:08.900 [2024-12-06 13:14:57.524310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x103dd90) 00:21:08.900 [2024-12-06 13:14:57.524325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.900 [2024-12-06 13:14:57.524354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e600, cid 0, qid 0 00:21:08.900 [2024-12-06 13:14:57.524363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e780, cid 1, qid 0 00:21:08.900 [2024-12-06 13:14:57.524369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107e900, cid 2, qid 0 00:21:08.900 [2024-12-06 13:14:57.524376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:08.900 [2024-12-06 13:14:57.524382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ec00, cid 4, qid 0 00:21:08.900 [2024-12-06 13:14:57.524483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.900 [2024-12-06 13:14:57.524499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.900 [2024-12-06 13:14:57.524504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ec00) on tqpair=0x103dd90 00:21:08.900 [2024-12-06 
13:14:57.524518] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:08.900 [2024-12-06 13:14:57.524534] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:08.900 [2024-12-06 13:14:57.524551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x103dd90) 00:21:08.900 [2024-12-06 13:14:57.524575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.900 [2024-12-06 13:14:57.524606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ec00, cid 4, qid 0 00:21:08.900 [2024-12-06 13:14:57.524676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.900 [2024-12-06 13:14:57.524685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.900 [2024-12-06 13:14:57.524690] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.900 [2024-12-06 13:14:57.524696] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x103dd90): datao=0, datal=4096, cccid=4 00:21:08.901 [2024-12-06 13:14:57.524702] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x107ec00) on tqpair(0x103dd90): expected_datao=0, payload_size=4096 00:21:08.901 [2024-12-06 13:14:57.524708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.524717] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.524723] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.524734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.901 [2024-12-06 13:14:57.524742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.901 [2024-12-06 13:14:57.524747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.524752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ec00) on tqpair=0x103dd90 00:21:08.901 [2024-12-06 13:14:57.524771] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:08.901 [2024-12-06 13:14:57.524810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.524818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x103dd90) 00:21:08.901 [2024-12-06 13:14:57.524828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.901 [2024-12-06 13:14:57.524838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.524844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.524849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x103dd90) 00:21:08.901 [2024-12-06 13:14:57.524857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.901 [2024-12-06 13:14:57.524891] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ec00, cid 4, qid 0 00:21:08.901 [2024-12-06 13:14:57.524902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ed80, cid 5, qid 0 00:21:08.901 [2024-12-06 13:14:57.525017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.901 [2024-12-06 13:14:57.525026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.901 [2024-12-06 13:14:57.525031] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.525036] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x103dd90): datao=0, datal=1024, cccid=4 00:21:08.901 [2024-12-06 13:14:57.525042] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x107ec00) on tqpair(0x103dd90): expected_datao=0, payload_size=1024 00:21:08.901 [2024-12-06 13:14:57.525048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.525057] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.525063] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.525071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.901 [2024-12-06 13:14:57.525079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.901 [2024-12-06 13:14:57.525083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.901 [2024-12-06 13:14:57.525089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ed80) on tqpair=0x103dd90 00:21:09.161 [2024-12-06 13:14:57.570484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.161 [2024-12-06 13:14:57.570550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.161 [2024-12-06 13:14:57.570560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.570569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ec00) on tqpair=0x103dd90 00:21:09.161 [2024-12-06 13:14:57.570605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.570613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x103dd90) 00:21:09.161 [2024-12-06 13:14:57.570632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.161 [2024-12-06 13:14:57.570686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ec00, cid 4, qid 0 00:21:09.161 [2024-12-06 13:14:57.570809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.161 [2024-12-06 13:14:57.570818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.161 [2024-12-06 13:14:57.570824] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.570829] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x103dd90): datao=0, datal=3072, cccid=4 00:21:09.161 [2024-12-06 13:14:57.570835] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x107ec00) on tqpair(0x103dd90): expected_datao=0, payload_size=3072 00:21:09.161 [2024-12-06 13:14:57.570842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.570854] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
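[editor's note] The three GET LOG PAGE commands above (payload sizes 1024, 3072, then 8) are the standard discovery-log dance: read the 1024-byte header to learn the record count, fetch the full page, then re-read the 8-byte generation counter to confirm the log did not change mid-read. The raw page can be pulled the same way with nvme-cli (device name illustrative):

    # The discovery log is log page 0x70; 1024 bytes matches the
    # header-sized first read in the trace above.
    nvme get-log /dev/nvme0 --log-id=0x70 --log-len=1024
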
00:21:09.161 [2024-12-06 13:14:57.570860] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.570871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.161 [2024-12-06 13:14:57.570879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.161 [2024-12-06 13:14:57.570884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.570890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ec00) on tqpair=0x103dd90 00:21:09.161 [2024-12-06 13:14:57.570905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.570912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x103dd90) 00:21:09.161 [2024-12-06 13:14:57.570922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.161 [2024-12-06 13:14:57.570963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ec00, cid 4, qid 0 00:21:09.161 [2024-12-06 13:14:57.571038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.161 [2024-12-06 13:14:57.571049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.161 [2024-12-06 13:14:57.571057] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.571066] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x103dd90): datao=0, datal=8, cccid=4 00:21:09.161 [2024-12-06 13:14:57.571076] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x107ec00) on tqpair(0x103dd90): expected_datao=0, payload_size=8 00:21:09.161 [2024-12-06 13:14:57.571085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.571097] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.571102] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.612569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.161 [2024-12-06 13:14:57.612611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.161 [2024-12-06 13:14:57.612617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.161 [2024-12-06 13:14:57.612624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ec00) on tqpair=0x103dd90 00:21:09.161 ===================================================== 00:21:09.161 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:09.161 ===================================================== 00:21:09.161 Controller Capabilities/Features 00:21:09.161 ================================ 00:21:09.161 Vendor ID: 0000 00:21:09.161 Subsystem Vendor ID: 0000 00:21:09.161 Serial Number: .................... 00:21:09.161 Model Number: ........................................ 
00:21:09.161 Firmware Version: 25.01
00:21:09.161 Recommended Arb Burst: 0
00:21:09.161 IEEE OUI Identifier: 00 00 00
00:21:09.161 Multi-path I/O
00:21:09.161 May have multiple subsystem ports: No
00:21:09.161 May have multiple controllers: No
00:21:09.161 Associated with SR-IOV VF: No
00:21:09.161 Max Data Transfer Size: 131072
00:21:09.161 Max Number of Namespaces: 0
00:21:09.161 Max Number of I/O Queues: 1024
00:21:09.161 NVMe Specification Version (VS): 1.3
00:21:09.161 NVMe Specification Version (Identify): 1.3
00:21:09.161 Maximum Queue Entries: 128
00:21:09.161 Contiguous Queues Required: Yes
00:21:09.161 Arbitration Mechanisms Supported
00:21:09.161 Weighted Round Robin: Not Supported
00:21:09.161 Vendor Specific: Not Supported
00:21:09.161 Reset Timeout: 15000 ms
00:21:09.161 Doorbell Stride: 4 bytes
00:21:09.161 NVM Subsystem Reset: Not Supported
00:21:09.161 Command Sets Supported
00:21:09.161 NVM Command Set: Supported
00:21:09.161 Boot Partition: Not Supported
00:21:09.161 Memory Page Size Minimum: 4096 bytes
00:21:09.161 Memory Page Size Maximum: 4096 bytes
00:21:09.161 Persistent Memory Region: Not Supported
00:21:09.161 Optional Asynchronous Events Supported
00:21:09.161 Namespace Attribute Notices: Not Supported
00:21:09.161 Firmware Activation Notices: Not Supported
00:21:09.161 ANA Change Notices: Not Supported
00:21:09.161 PLE Aggregate Log Change Notices: Not Supported
00:21:09.161 LBA Status Info Alert Notices: Not Supported
00:21:09.161 EGE Aggregate Log Change Notices: Not Supported
00:21:09.161 Normal NVM Subsystem Shutdown event: Not Supported
00:21:09.161 Zone Descriptor Change Notices: Not Supported
00:21:09.161 Discovery Log Change Notices: Supported
00:21:09.161 Controller Attributes
00:21:09.161 128-bit Host Identifier: Not Supported
00:21:09.161 Non-Operational Permissive Mode: Not Supported
00:21:09.161 NVM Sets: Not Supported
00:21:09.161 Read Recovery Levels: Not Supported
00:21:09.161 Endurance Groups: Not Supported
00:21:09.161 Predictable Latency Mode: Not Supported
00:21:09.161 Traffic Based Keep Alive: Not Supported
00:21:09.161 Namespace Granularity: Not Supported
00:21:09.161 SQ Associations: Not Supported
00:21:09.161 UUID List: Not Supported
00:21:09.161 Multi-Domain Subsystem: Not Supported
00:21:09.161 Fixed Capacity Management: Not Supported
00:21:09.161 Variable Capacity Management: Not Supported
00:21:09.161 Delete Endurance Group: Not Supported
00:21:09.161 Delete NVM Set: Not Supported
00:21:09.161 Extended LBA Formats Supported: Not Supported
00:21:09.161 Flexible Data Placement Supported: Not Supported
00:21:09.161
00:21:09.161 Controller Memory Buffer Support
00:21:09.161 ================================
00:21:09.161 Supported: No
00:21:09.161
00:21:09.161 Persistent Memory Region Support
00:21:09.161 ================================
00:21:09.161 Supported: No
00:21:09.161
00:21:09.161 Admin Command Set Attributes
00:21:09.161 ============================
00:21:09.161 Security Send/Receive: Not Supported
00:21:09.161 Format NVM: Not Supported
00:21:09.161 Firmware Activate/Download: Not Supported
00:21:09.161 Namespace Management: Not Supported
00:21:09.161 Device Self-Test: Not Supported
00:21:09.161 Directives: Not Supported
00:21:09.161 NVMe-MI: Not Supported
00:21:09.161 Virtualization Management: Not Supported
00:21:09.161 Doorbell Buffer Config: Not Supported
00:21:09.161 Get LBA Status Capability: Not Supported
00:21:09.161 Command & Feature Lockdown Capability: Not Supported
00:21:09.161 Abort Command Limit: 1
00:21:09.161 Async Event Request Limit: 4
00:21:09.161 Number of Firmware Slots: N/A
00:21:09.161 Firmware Slot 1 Read-Only: N/A
00:21:09.161 Firmware Activation Without Reset: N/A
00:21:09.161 Multiple Update Detection Support: N/A
00:21:09.161 Firmware Update Granularity: No Information Provided
00:21:09.161 Per-Namespace SMART Log: No
00:21:09.161 Asymmetric Namespace Access Log Page: Not Supported
00:21:09.161 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:09.161 Command Effects Log Page: Not Supported
00:21:09.162 Get Log Page Extended Data: Supported
00:21:09.162 Telemetry Log Pages: Not Supported
00:21:09.162 Persistent Event Log Pages: Not Supported
00:21:09.162 Supported Log Pages Log Page: May Support
00:21:09.162 Commands Supported & Effects Log Page: Not Supported
00:21:09.162 Feature Identifiers & Effects Log Page: May Support
00:21:09.162 NVMe-MI Commands & Effects Log Page: May Support
00:21:09.162 Data Area 4 for Telemetry Log: Not Supported
00:21:09.162 Error Log Page Entries Supported: 128
00:21:09.162 Keep Alive: Not Supported
00:21:09.162
00:21:09.162 NVM Command Set Attributes
00:21:09.162 ==========================
00:21:09.162 Submission Queue Entry Size
00:21:09.162 Max: 1
00:21:09.162 Min: 1
00:21:09.162 Completion Queue Entry Size
00:21:09.162 Max: 1
00:21:09.162 Min: 1
00:21:09.162 Number of Namespaces: 0
00:21:09.162 Compare Command: Not Supported
00:21:09.162 Write Uncorrectable Command: Not Supported
00:21:09.162 Dataset Management Command: Not Supported
00:21:09.162 Write Zeroes Command: Not Supported
00:21:09.162 Set Features Save Field: Not Supported
00:21:09.162 Reservations: Not Supported
00:21:09.162 Timestamp: Not Supported
00:21:09.162 Copy: Not Supported
00:21:09.162 Volatile Write Cache: Not Present
00:21:09.162 Atomic Write Unit (Normal): 1
00:21:09.162 Atomic Write Unit (PFail): 1
00:21:09.162 Atomic Compare & Write Unit: 1
00:21:09.162 Fused Compare & Write: Supported
00:21:09.162 Scatter-Gather List
00:21:09.162 SGL Command Set: Supported
00:21:09.162 SGL Keyed: Supported
00:21:09.162 SGL Bit Bucket Descriptor: Not Supported
00:21:09.162 SGL Metadata Pointer: Not Supported
00:21:09.162 Oversized SGL: Not Supported
00:21:09.162 SGL Metadata Address: Not Supported
00:21:09.162 SGL Offset: Supported
00:21:09.162 Transport SGL Data Block: Not Supported
00:21:09.162 Replay Protected Memory Block: Not Supported
00:21:09.162
00:21:09.162 Firmware Slot Information
00:21:09.162 =========================
00:21:09.162 Active slot: 0
00:21:09.162
00:21:09.162
00:21:09.162 Error Log
00:21:09.162 =========
00:21:09.162
00:21:09.162 Active Namespaces
00:21:09.162 =================
00:21:09.162 Discovery Log Page
00:21:09.162 ==================
00:21:09.162 Generation Counter: 2
00:21:09.162 Number of Records: 2
00:21:09.162 Record Format: 0
00:21:09.162
00:21:09.162 Discovery Log Entry 0
00:21:09.162 ----------------------
00:21:09.162 Transport Type: 3 (TCP)
00:21:09.162 Address Family: 1 (IPv4)
00:21:09.162 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:09.162 Entry Flags:
00:21:09.162 Duplicate Returned Information: 1
00:21:09.162 Explicit Persistent Connection Support for Discovery: 1
00:21:09.162 Transport Requirements:
00:21:09.162 Secure Channel: Not Required
00:21:09.162 Port ID: 0 (0x0000)
00:21:09.162 Controller ID: 65535 (0xffff)
00:21:09.162 Admin Max SQ Size: 128
00:21:09.162 Transport Service Identifier: 4420
00:21:09.162 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:09.162 Transport Address: 10.0.0.3
00:21:09.162 Discovery Log Entry 1
00:21:09.162 ----------------------
00:21:09.162 Transport Type: 3 (TCP)
00:21:09.162 Address Family: 1 (IPv4)
00:21:09.162 Subsystem Type: 2 (NVM Subsystem)
00:21:09.162 Entry Flags:
00:21:09.162 Duplicate Returned Information: 0
00:21:09.162 Explicit Persistent Connection Support for Discovery: 0
00:21:09.162 Transport Requirements:
00:21:09.162 Secure Channel: Not Required
00:21:09.162 Port ID: 0 (0x0000)
00:21:09.162 Controller ID: 65535 (0xffff)
00:21:09.162 Admin Max SQ Size: 128
00:21:09.162 Transport Service Identifier: 4420
00:21:09.162 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:09.162 Transport Address: 10.0.0.3 [2024-12-06 13:14:57.612758] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:21:09.162 [2024-12-06 13:14:57.612776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e600) on tqpair=0x103dd90
00:21:09.162 [2024-12-06 13:14:57.612786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.162 [2024-12-06 13:14:57.612793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e780) on tqpair=0x103dd90
00:21:09.162 [2024-12-06 13:14:57.612798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.162 [2024-12-06 13:14:57.612804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107e900) on tqpair=0x103dd90
00:21:09.162 [2024-12-06 13:14:57.612809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.162 [2024-12-06 13:14:57.612815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90
00:21:09.162 [2024-12-06 13:14:57.612821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.162 [2024-12-06 13:14:57.612837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:09.162 [2024-12-06 13:14:57.612843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:09.162 [2024-12-06 13:14:57.612848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90)
00:21:09.162 [2024-12-06 13:14:57.612861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.162 [2024-12-06 13:14:57.612891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0
00:21:09.162 [2024-12-06 13:14:57.612973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:09.162 [2024-12-06 13:14:57.612980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:09.162 [2024-12-06 13:14:57.612984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:09.162 [2024-12-06 13:14:57.612989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90
00:21:09.162 [2024-12-06 13:14:57.612999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:09.162 [2024-12-06 13:14:57.613004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:09.162 [2024-12-06 13:14:57.613008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90)
00:21:09.162 [2024-12-06
13:14:57.613016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.162 [2024-12-06 13:14:57.613041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.162 [2024-12-06 13:14:57.613121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.162 [2024-12-06 13:14:57.613128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.162 [2024-12-06 13:14:57.613132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.162 [2024-12-06 13:14:57.613137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.162 [2024-12-06 13:14:57.613148] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:09.162 [2024-12-06 13:14:57.613154] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:09.162 [2024-12-06 13:14:57.613165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.162 [2024-12-06 13:14:57.613170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.162 [2024-12-06 13:14:57.613174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.162 [2024-12-06 13:14:57.613182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.162 [2024-12-06 13:14:57.613204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.162 [2024-12-06 13:14:57.613259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.162 [2024-12-06 13:14:57.613266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.162 [2024-12-06 13:14:57.613270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.162 [2024-12-06 13:14:57.613274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.162 [2024-12-06 13:14:57.613286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.162 [2024-12-06 13:14:57.613291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.162 [2024-12-06 13:14:57.613295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.162 [2024-12-06 13:14:57.613303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.162 [2024-12-06 13:14:57.613321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.162 [2024-12-06 13:14:57.613372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.162 [2024-12-06 13:14:57.613379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.163 [2024-12-06 13:14:57.613383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.613399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613408] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.163 [2024-12-06 13:14:57.613416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.163 [2024-12-06 13:14:57.613434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.163 [2024-12-06 13:14:57.613504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.163 [2024-12-06 13:14:57.613513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.163 [2024-12-06 13:14:57.613517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.613533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.163 [2024-12-06 13:14:57.613561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.163 [2024-12-06 13:14:57.613582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.163 [2024-12-06 13:14:57.613634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.163 [2024-12-06 13:14:57.613641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.163 [2024-12-06 13:14:57.613645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.613660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.163 [2024-12-06 13:14:57.613677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.163 [2024-12-06 13:14:57.613695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.163 [2024-12-06 13:14:57.613746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.163 [2024-12-06 13:14:57.613753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.163 [2024-12-06 13:14:57.613757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.613772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.163 [2024-12-06 13:14:57.613789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.163 [2024-12-06 13:14:57.613821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.163 [2024-12-06 13:14:57.613876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.163 [2024-12-06 13:14:57.613892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.163 [2024-12-06 13:14:57.613897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.613913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.613922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.163 [2024-12-06 13:14:57.613930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.163 [2024-12-06 13:14:57.613950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.163 [2024-12-06 13:14:57.614003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.163 [2024-12-06 13:14:57.614010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.163 [2024-12-06 13:14:57.614014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.614029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.163 [2024-12-06 13:14:57.614045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.163 [2024-12-06 13:14:57.614063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.163 [2024-12-06 13:14:57.614131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.163 [2024-12-06 13:14:57.614138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.163 [2024-12-06 13:14:57.614142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.614157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.163 [2024-12-06 13:14:57.614173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.163 [2024-12-06 13:14:57.614191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.163 
[2024-12-06 13:14:57.614242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.163 [2024-12-06 13:14:57.614250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.163 [2024-12-06 13:14:57.614254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.614269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.163 [2024-12-06 13:14:57.614286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.163 [2024-12-06 13:14:57.614304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.163 [2024-12-06 13:14:57.614358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.163 [2024-12-06 13:14:57.614365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.163 [2024-12-06 13:14:57.614369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.614384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.614393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.163 [2024-12-06 13:14:57.614400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.163 [2024-12-06 13:14:57.614418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.163 [2024-12-06 13:14:57.618473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.163 [2024-12-06 13:14:57.618495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.163 [2024-12-06 13:14:57.618500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.618505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.618520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.618526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.618530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x103dd90) 00:21:09.163 [2024-12-06 13:14:57.618540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.163 [2024-12-06 13:14:57.618567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107ea80, cid 3, qid 0 00:21:09.163 [2024-12-06 13:14:57.618632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.163 [2024-12-06 13:14:57.618639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
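The burst of FABRIC PROPERTY GET commands on cid:3 above is the host polling CSTS while the discovery controller shuts down: CC.SHN is set once (the PROPERTY SET earlier in this trace), then CSTS.SHST is read until it reports completion or the timeout expires; the trace shows RTD3E = 0 and a 10000 ms shutdown timeout. A self-contained sketch of that loop, where prop_get/prop_set are hypothetical stand-ins backed by a tiny simulated register file; a real fabrics host issues Property Get/Set capsules instead:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC   0x14   /* Controller Configuration */
#define NVME_REG_CSTS 0x1c   /* Controller Status        */

static uint32_t regs[0x40 / 4];   /* simulated property space */

static uint32_t prop_get(uint32_t off) { return regs[off / 4]; }

static void prop_set(uint32_t off, uint32_t val)
{
    regs[off / 4] = val;
    if (off == NVME_REG_CC && ((val >> 14) & 3u) == 1u) {
        /* Simulated controller: a normal shutdown completes at once. */
        regs[NVME_REG_CSTS / 4] |= 2u << 2;   /* CSTS.SHST = 10b */
    }
}

/* Request a normal shutdown and poll until CSTS.SHST reports
 * "shutdown processing complete"; max_polls stands in for the
 * 10000 ms deadline the trace uses. */
static bool shutdown_controller(unsigned max_polls)
{
    uint32_t cc = prop_get(NVME_REG_CC);

    cc = (cc & ~(3u << 14)) | (1u << 14);     /* CC.SHN = 01b, normal */
    prop_set(NVME_REG_CC, cc);

    while (max_polls--) {                     /* the repeated PROPERTY GETs */
        if (((prop_get(NVME_REG_CSTS) >> 2) & 3u) == 2u) {
            return true;                      /* SHST: shutdown complete */
        }
    }
    return false;
}

int main(void)
{
    printf("shutdown %s\n",
           shutdown_controller(10000) ? "complete" : "timed out");
    return 0;
}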
00:21:09.163 [2024-12-06 13:14:57.618643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.163 [2024-12-06 13:14:57.618647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107ea80) on tqpair=0x103dd90 00:21:09.163 [2024-12-06 13:14:57.618656] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:21:09.163 00:21:09.164 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:09.164 [2024-12-06 13:14:57.664846] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:21:09.164 [2024-12-06 13:14:57.664935] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87673 ] 00:21:09.426 [2024-12-06 13:14:57.827835] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:09.426 [2024-12-06 13:14:57.827921] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:09.426 [2024-12-06 13:14:57.827930] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:09.426 [2024-12-06 13:14:57.827946] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:09.426 [2024-12-06 13:14:57.827957] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:09.426 [2024-12-06 13:14:57.828340] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:09.426 [2024-12-06 13:14:57.828411] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x702d90 0 00:21:09.426 [2024-12-06 13:14:57.842476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:09.426 [2024-12-06 13:14:57.842502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:09.426 [2024-12-06 13:14:57.842510] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:09.427 [2024-12-06 13:14:57.842513] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:09.427 [2024-12-06 13:14:57.842552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.842560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.842565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x702d90) 00:21:09.427 [2024-12-06 13:14:57.842581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:09.427 [2024-12-06 13:14:57.842619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743600, cid 0, qid 0 00:21:09.427 [2024-12-06 13:14:57.850472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.427 [2024-12-06 13:14:57.850496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.427 [2024-12-06 13:14:57.850502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.850507] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743600) on tqpair=0x702d90 00:21:09.427 [2024-12-06 13:14:57.850524] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:09.427 [2024-12-06 13:14:57.850534] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:09.427 [2024-12-06 13:14:57.850541] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:09.427 [2024-12-06 13:14:57.850563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.850569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.850573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x702d90) 00:21:09.427 [2024-12-06 13:14:57.850585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.427 [2024-12-06 13:14:57.850616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743600, cid 0, qid 0 00:21:09.427 [2024-12-06 13:14:57.851010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.427 [2024-12-06 13:14:57.851026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.427 [2024-12-06 13:14:57.851031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.851036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743600) on tqpair=0x702d90 00:21:09.427 [2024-12-06 13:14:57.851043] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:09.427 [2024-12-06 13:14:57.851052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:09.427 [2024-12-06 13:14:57.851061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.851065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.851069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x702d90) 00:21:09.427 [2024-12-06 13:14:57.851077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.427 [2024-12-06 13:14:57.851101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743600, cid 0, qid 0 00:21:09.427 [2024-12-06 13:14:57.851362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.427 [2024-12-06 13:14:57.851377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.427 [2024-12-06 13:14:57.851382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.851386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743600) on tqpair=0x702d90 00:21:09.427 [2024-12-06 13:14:57.851393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:09.427 [2024-12-06 13:14:57.851403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:09.427 [2024-12-06 13:14:57.851411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
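The "read cap" step above fetches the controller's 64-bit CAP property; several lines of the identify dump earlier (Maximum Queue Entries: 128, Reset Timeout: 15000 ms, Doorbell Stride: 4 bytes, the 4096-byte page sizes) are bit-fields of that one register, printed after decoding. A small self-contained decoder, with field positions taken from the NVMe specification:

#include <stdint.h>
#include <stdio.h>

/* Decode a few CAP fields and print them the way the identify dump does. */
static void print_cap(uint64_t cap)
{
    uint16_t mqes   = cap & 0xffff;          /* bits 15:0, 0-based        */
    uint8_t  to     = (cap >> 24) & 0xff;    /* bits 31:24, 500 ms units  */
    uint8_t  dstrd  = (cap >> 32) & 0xf;     /* bits 35:32                */
    uint8_t  mpsmin = (cap >> 48) & 0xf;     /* bits 51:48                */
    uint8_t  mpsmax = (cap >> 52) & 0xf;     /* bits 55:52                */

    printf("Maximum Queue Entries: %u\n", mqes + 1u);
    printf("Reset Timeout: %u ms\n", to * 500u);
    printf("Doorbell Stride: %u bytes\n", 4u << dstrd);
    printf("Memory Page Size Minimum: %u bytes\n", 1u << (12 + mpsmin));
    printf("Memory Page Size Maximum: %u bytes\n", 1u << (12 + mpsmax));
}

int main(void)
{
    /* MQES=127, TO=30, DSTRD=0, MPSMIN=MPSMAX=0: the raw values behind
     * the numbers this discovery controller reported in the dump above. */
    print_cap(127ull | (30ull << 24));
    return 0;
}

Note that MQES and the page-size fields are 0-based or exponent-coded, so the register actually carries 127, 30 and 0 rather than the printed 128, 15000 and 4096.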
00:21:09.427 [2024-12-06 13:14:57.851416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.851420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x702d90) 00:21:09.427 [2024-12-06 13:14:57.851428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.427 [2024-12-06 13:14:57.851462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743600, cid 0, qid 0 00:21:09.427 [2024-12-06 13:14:57.851713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.427 [2024-12-06 13:14:57.851727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.427 [2024-12-06 13:14:57.851732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.851737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743600) on tqpair=0x702d90 00:21:09.427 [2024-12-06 13:14:57.851743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:09.427 [2024-12-06 13:14:57.851755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.851760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.851764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x702d90) 00:21:09.427 [2024-12-06 13:14:57.851772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.427 [2024-12-06 13:14:57.851795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743600, cid 0, qid 0 00:21:09.427 [2024-12-06 13:14:57.851945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.427 [2024-12-06 13:14:57.851960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.427 [2024-12-06 13:14:57.851964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.851969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743600) on tqpair=0x702d90 00:21:09.427 [2024-12-06 13:14:57.851974] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:09.427 [2024-12-06 13:14:57.851980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:09.427 [2024-12-06 13:14:57.851989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:09.427 [2024-12-06 13:14:57.852102] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:09.427 [2024-12-06 13:14:57.852108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:09.427 [2024-12-06 13:14:57.852119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.852123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.852127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x702d90) 00:21:09.427 [2024-12-06 13:14:57.852135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.427 [2024-12-06 13:14:57.852159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743600, cid 0, qid 0 00:21:09.427 [2024-12-06 13:14:57.852503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.427 [2024-12-06 13:14:57.852519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.427 [2024-12-06 13:14:57.852524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.852528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743600) on tqpair=0x702d90 00:21:09.427 [2024-12-06 13:14:57.852534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:09.427 [2024-12-06 13:14:57.852546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.852551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.852555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x702d90) 00:21:09.427 [2024-12-06 13:14:57.852563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.427 [2024-12-06 13:14:57.852586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743600, cid 0, qid 0 00:21:09.427 [2024-12-06 13:14:57.852793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.427 [2024-12-06 13:14:57.852807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.427 [2024-12-06 13:14:57.852812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.852816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743600) on tqpair=0x702d90 00:21:09.427 [2024-12-06 13:14:57.852821] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:09.427 [2024-12-06 13:14:57.852827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:09.427 [2024-12-06 13:14:57.852836] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:09.427 [2024-12-06 13:14:57.852848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:09.427 [2024-12-06 13:14:57.852859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.852864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x702d90) 00:21:09.427 [2024-12-06 13:14:57.852872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.427 [2024-12-06 13:14:57.852895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743600, cid 0, qid 0 00:21:09.427 [2024-12-06 13:14:57.853216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.427 [2024-12-06 13:14:57.853231] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.427 [2024-12-06 13:14:57.853236] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.853240] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x702d90): datao=0, datal=4096, cccid=0 00:21:09.427 [2024-12-06 13:14:57.853245] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x743600) on tqpair(0x702d90): expected_datao=0, payload_size=4096 00:21:09.427 [2024-12-06 13:14:57.853250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.853260] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.853265] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.853275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.427 [2024-12-06 13:14:57.853281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.427 [2024-12-06 13:14:57.853285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.427 [2024-12-06 13:14:57.853289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743600) on tqpair=0x702d90 00:21:09.428 [2024-12-06 13:14:57.853299] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:09.428 [2024-12-06 13:14:57.853305] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:09.428 [2024-12-06 13:14:57.853310] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:09.428 [2024-12-06 13:14:57.853315] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:09.428 [2024-12-06 13:14:57.853321] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:09.428 [2024-12-06 13:14:57.853326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.853336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.853345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x702d90) 00:21:09.428 [2024-12-06 13:14:57.853362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:09.428 [2024-12-06 13:14:57.853386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743600, cid 0, qid 0 00:21:09.428 [2024-12-06 13:14:57.853465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.428 [2024-12-06 13:14:57.853481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.428 [2024-12-06 13:14:57.853485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743600) on tqpair=0x702d90 00:21:09.428 [2024-12-06 
13:14:57.853504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x702d90) 00:21:09.428 [2024-12-06 13:14:57.853521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.428 [2024-12-06 13:14:57.853528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x702d90) 00:21:09.428 [2024-12-06 13:14:57.853542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.428 [2024-12-06 13:14:57.853549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x702d90) 00:21:09.428 [2024-12-06 13:14:57.853562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.428 [2024-12-06 13:14:57.853569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.428 [2024-12-06 13:14:57.853583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.428 [2024-12-06 13:14:57.853588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.853598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.853606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.853610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x702d90) 00:21:09.428 [2024-12-06 13:14:57.853618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.428 [2024-12-06 13:14:57.853640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743600, cid 0, qid 0 00:21:09.428 [2024-12-06 13:14:57.853648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743780, cid 1, qid 0 00:21:09.428 [2024-12-06 13:14:57.853653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743900, cid 2, qid 0 00:21:09.428 [2024-12-06 13:14:57.853659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.428 [2024-12-06 13:14:57.853663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743c00, cid 4, qid 0 00:21:09.428 
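Just above, one SET FEATURES ASYNC EVENT CONFIGURATION (feature 0x0b, per the cdw10 in the trace) is followed by four ASYNC EVENT REQUEST submissions on cid 0 through 3: the host keeps as many AERs outstanding as the limit the controller advertised in identify (4 here), and each sits in the controller until an event fires. A sketch of that fan-out; submit_admin_cmd() and the trimmed-down command struct are hypothetical stand-ins for the transport send path, not SPDK's API:

#include <stdint.h>
#include <string.h>

#define NVME_OPC_ASYNC_EVENT_REQUEST 0x0c   /* the (0c) opcode in the trace */

/* Only the command fields this sketch needs; a real SQE is 64 bytes. */
struct admin_cmd {
    uint8_t  opc;
    uint16_t cid;
};

/* Hypothetical transport hook: queue one admin command capsule. */
static void submit_admin_cmd(const struct admin_cmd *cmd)
{
    (void)cmd;   /* a real driver would build and send the capsule here */
}

/* Arm one AER per slot the controller allows; the dump above reports a
 * limit of 4, which is why cids 0..3 all carry opcode 0x0c. */
static void arm_aers(unsigned aer_limit, uint16_t first_cid)
{
    for (unsigned i = 0; i < aer_limit; i++) {
        struct admin_cmd cmd;
        memset(&cmd, 0, sizeof(cmd));
        cmd.opc = NVME_OPC_ASYNC_EVENT_REQUEST;
        cmd.cid = (uint16_t)(first_cid + i);
        submit_admin_cmd(&cmd);   /* no payload; completes only on an event */
    }
}

int main(void)
{
    arm_aers(4, 0);   /* matches the limit of 4 and cids 0..3 above */
    return 0;
}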
[2024-12-06 13:14:57.854051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.428 [2024-12-06 13:14:57.854066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.428 [2024-12-06 13:14:57.854071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.854075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743c00) on tqpair=0x702d90 00:21:09.428 [2024-12-06 13:14:57.854081] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:09.428 [2024-12-06 13:14:57.854092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.854103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.854110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.854118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.854123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.854126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x702d90) 00:21:09.428 [2024-12-06 13:14:57.854134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:09.428 [2024-12-06 13:14:57.854157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743c00, cid 4, qid 0 00:21:09.428 [2024-12-06 13:14:57.854385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.428 [2024-12-06 13:14:57.854400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.428 [2024-12-06 13:14:57.854404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.854409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743c00) on tqpair=0x702d90 00:21:09.428 [2024-12-06 13:14:57.858487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.858518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.858531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.858536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x702d90) 00:21:09.428 [2024-12-06 13:14:57.858545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.428 [2024-12-06 13:14:57.858573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743c00, cid 4, qid 0 00:21:09.428 [2024-12-06 13:14:57.858806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.428 [2024-12-06 13:14:57.858822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.428 [2024-12-06 13:14:57.858826] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.858831] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x702d90): datao=0, datal=4096, cccid=4 00:21:09.428 [2024-12-06 13:14:57.858836] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x743c00) on tqpair(0x702d90): expected_datao=0, payload_size=4096 00:21:09.428 [2024-12-06 13:14:57.858841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.858851] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.858855] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.858891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.428 [2024-12-06 13:14:57.858898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.428 [2024-12-06 13:14:57.858902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.858906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743c00) on tqpair=0x702d90 00:21:09.428 [2024-12-06 13:14:57.858928] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:09.428 [2024-12-06 13:14:57.858942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.858954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.858963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.858967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x702d90) 00:21:09.428 [2024-12-06 13:14:57.858976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.428 [2024-12-06 13:14:57.859000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743c00, cid 4, qid 0 00:21:09.428 [2024-12-06 13:14:57.859322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.428 [2024-12-06 13:14:57.859340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.428 [2024-12-06 13:14:57.859345] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.859349] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x702d90): datao=0, datal=4096, cccid=4 00:21:09.428 [2024-12-06 13:14:57.859355] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x743c00) on tqpair(0x702d90): expected_datao=0, payload_size=4096 00:21:09.428 [2024-12-06 13:14:57.859360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.859367] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.859372] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.428 [2024-12-06 13:14:57.859381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.428 [2024-12-06 13:14:57.859387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.428 [2024-12-06 13:14:57.859391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.428 
[2024-12-06 13:14:57.859395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743c00) on tqpair=0x702d90 00:21:09.428 [2024-12-06 13:14:57.859413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.859426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:09.428 [2024-12-06 13:14:57.859436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.859440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x702d90) 00:21:09.429 [2024-12-06 13:14:57.859461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.429 [2024-12-06 13:14:57.859489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743c00, cid 4, qid 0 00:21:09.429 [2024-12-06 13:14:57.859848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.429 [2024-12-06 13:14:57.859864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.429 [2024-12-06 13:14:57.859869] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.859873] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x702d90): datao=0, datal=4096, cccid=4 00:21:09.429 [2024-12-06 13:14:57.859878] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x743c00) on tqpair(0x702d90): expected_datao=0, payload_size=4096 00:21:09.429 [2024-12-06 13:14:57.859883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.859891] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.859895] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.859904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.429 [2024-12-06 13:14:57.859910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.429 [2024-12-06 13:14:57.859914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.859918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743c00) on tqpair=0x702d90 00:21:09.429 [2024-12-06 13:14:57.859929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:09.429 [2024-12-06 13:14:57.859938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:09.429 [2024-12-06 13:14:57.859954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:09.429 [2024-12-06 13:14:57.859962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:09.429 [2024-12-06 13:14:57.859968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:09.429 [2024-12-06 13:14:57.859974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:09.429 [2024-12-06 13:14:57.859980] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:09.429 [2024-12-06 13:14:57.859985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:09.429 [2024-12-06 13:14:57.859992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:09.429 [2024-12-06 13:14:57.860013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.860018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x702d90) 00:21:09.429 [2024-12-06 13:14:57.860027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.429 [2024-12-06 13:14:57.860035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.860039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.860043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x702d90) 00:21:09.429 [2024-12-06 13:14:57.860050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.429 [2024-12-06 13:14:57.860080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743c00, cid 4, qid 0 00:21:09.429 [2024-12-06 13:14:57.860089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743d80, cid 5, qid 0 00:21:09.429 [2024-12-06 13:14:57.860551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.429 [2024-12-06 13:14:57.860568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.429 [2024-12-06 13:14:57.860573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.860578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743c00) on tqpair=0x702d90 00:21:09.429 [2024-12-06 13:14:57.860585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.429 [2024-12-06 13:14:57.860592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.429 [2024-12-06 13:14:57.860595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.860599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743d80) on tqpair=0x702d90 00:21:09.429 [2024-12-06 13:14:57.860611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.860616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x702d90) 00:21:09.429 [2024-12-06 13:14:57.860624] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.429 [2024-12-06 13:14:57.860648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743d80, cid 5, qid 0 00:21:09.429 [2024-12-06 13:14:57.860893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.429 [2024-12-06 13:14:57.860908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.429 [2024-12-06 13:14:57.860912] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.860917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743d80) on tqpair=0x702d90 00:21:09.429 [2024-12-06 13:14:57.860928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.860933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x702d90) 00:21:09.429 [2024-12-06 13:14:57.860941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.429 [2024-12-06 13:14:57.860962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743d80, cid 5, qid 0 00:21:09.429 [2024-12-06 13:14:57.861119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.429 [2024-12-06 13:14:57.861134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.429 [2024-12-06 13:14:57.861138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.861143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743d80) on tqpair=0x702d90 00:21:09.429 [2024-12-06 13:14:57.861154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.861159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x702d90) 00:21:09.429 [2024-12-06 13:14:57.861167] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.429 [2024-12-06 13:14:57.861187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743d80, cid 5, qid 0 00:21:09.429 [2024-12-06 13:14:57.861511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.429 [2024-12-06 13:14:57.861525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.429 [2024-12-06 13:14:57.861530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.861534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743d80) on tqpair=0x702d90 00:21:09.429 [2024-12-06 13:14:57.861556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.861562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x702d90) 00:21:09.429 [2024-12-06 13:14:57.861570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.429 [2024-12-06 13:14:57.861578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.861582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x702d90) 00:21:09.429 [2024-12-06 13:14:57.861589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.429 [2024-12-06 13:14:57.861598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.861602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x702d90) 00:21:09.429 [2024-12-06 13:14:57.861608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.429 [2024-12-06 13:14:57.861621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.861626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x702d90) 00:21:09.429 [2024-12-06 13:14:57.861633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.429 [2024-12-06 13:14:57.861657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743d80, cid 5, qid 0 00:21:09.429 [2024-12-06 13:14:57.861666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743c00, cid 4, qid 0 00:21:09.429 [2024-12-06 13:14:57.861671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743f00, cid 6, qid 0 00:21:09.429 [2024-12-06 13:14:57.861676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x744080, cid 7, qid 0 00:21:09.429 [2024-12-06 13:14:57.862134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.429 [2024-12-06 13:14:57.862150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.429 [2024-12-06 13:14:57.862155] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.862159] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x702d90): datao=0, datal=8192, cccid=5 00:21:09.429 [2024-12-06 13:14:57.862164] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x743d80) on tqpair(0x702d90): expected_datao=0, payload_size=8192 00:21:09.429 [2024-12-06 13:14:57.862169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.862189] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.862194] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.862201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.429 [2024-12-06 13:14:57.862207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.429 [2024-12-06 13:14:57.862211] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.862214] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x702d90): datao=0, datal=512, cccid=4 00:21:09.429 [2024-12-06 13:14:57.862219] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x743c00) on tqpair(0x702d90): expected_datao=0, payload_size=512 00:21:09.429 [2024-12-06 13:14:57.862224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.862230] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.862234] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.429 [2024-12-06 13:14:57.862240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.429 [2024-12-06 13:14:57.862246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.429 [2024-12-06 13:14:57.862250] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x702d90): datao=0, datal=512, cccid=6 00:21:09.430 [2024-12-06 13:14:57.862258] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x743f00) on tqpair(0x702d90): expected_datao=0, payload_size=512 00:21:09.430 [2024-12-06 13:14:57.862263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862269] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862273] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.430 [2024-12-06 13:14:57.862286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.430 [2024-12-06 13:14:57.862290] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862293] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x702d90): datao=0, datal=4096, cccid=7 00:21:09.430 [2024-12-06 13:14:57.862298] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x744080) on tqpair(0x702d90): expected_datao=0, payload_size=4096 00:21:09.430 [2024-12-06 13:14:57.862303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862310] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862314] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.430 [2024-12-06 13:14:57.862329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.430 [2024-12-06 13:14:57.862332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743d80) on tqpair=0x702d90 00:21:09.430 [2024-12-06 13:14:57.862355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.430 [2024-12-06 13:14:57.862362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.430 [2024-12-06 13:14:57.862366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743c00) on tqpair=0x702d90 00:21:09.430 [2024-12-06 13:14:57.862383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.430 [2024-12-06 13:14:57.862390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.430 [2024-12-06 13:14:57.862393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743f00) on tqpair=0x702d90 00:21:09.430 [2024-12-06 13:14:57.862405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.430 [2024-12-06 13:14:57.862411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.430 [2024-12-06 13:14:57.862415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.430 [2024-12-06 13:14:57.862419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x744080) on tqpair=0x702d90 00:21:09.430 ===================================================== 00:21:09.430 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:09.430 ===================================================== 00:21:09.430 Controller Capabilities/Features 00:21:09.430 ================================ 
00:21:09.430 Vendor ID: 8086 00:21:09.430 Subsystem Vendor ID: 8086 00:21:09.430 Serial Number: SPDK00000000000001 00:21:09.430 Model Number: SPDK bdev Controller 00:21:09.430 Firmware Version: 25.01 00:21:09.430 Recommended Arb Burst: 6 00:21:09.430 IEEE OUI Identifier: e4 d2 5c 00:21:09.430 Multi-path I/O 00:21:09.430 May have multiple subsystem ports: Yes 00:21:09.430 May have multiple controllers: Yes 00:21:09.430 Associated with SR-IOV VF: No 00:21:09.430 Max Data Transfer Size: 131072 00:21:09.430 Max Number of Namespaces: 32 00:21:09.430 Max Number of I/O Queues: 127 00:21:09.430 NVMe Specification Version (VS): 1.3 00:21:09.430 NVMe Specification Version (Identify): 1.3 00:21:09.430 Maximum Queue Entries: 128 00:21:09.430 Contiguous Queues Required: Yes 00:21:09.430 Arbitration Mechanisms Supported 00:21:09.430 Weighted Round Robin: Not Supported 00:21:09.430 Vendor Specific: Not Supported 00:21:09.430 Reset Timeout: 15000 ms 00:21:09.430 Doorbell Stride: 4 bytes 00:21:09.430 NVM Subsystem Reset: Not Supported 00:21:09.430 Command Sets Supported 00:21:09.430 NVM Command Set: Supported 00:21:09.430 Boot Partition: Not Supported 00:21:09.430 Memory Page Size Minimum: 4096 bytes 00:21:09.430 Memory Page Size Maximum: 4096 bytes 00:21:09.430 Persistent Memory Region: Not Supported 00:21:09.430 Optional Asynchronous Events Supported 00:21:09.430 Namespace Attribute Notices: Supported 00:21:09.430 Firmware Activation Notices: Not Supported 00:21:09.430 ANA Change Notices: Not Supported 00:21:09.430 PLE Aggregate Log Change Notices: Not Supported 00:21:09.430 LBA Status Info Alert Notices: Not Supported 00:21:09.430 EGE Aggregate Log Change Notices: Not Supported 00:21:09.430 Normal NVM Subsystem Shutdown event: Not Supported 00:21:09.430 Zone Descriptor Change Notices: Not Supported 00:21:09.430 Discovery Log Change Notices: Not Supported 00:21:09.430 Controller Attributes 00:21:09.430 128-bit Host Identifier: Supported 00:21:09.430 Non-Operational Permissive Mode: Not Supported 00:21:09.430 NVM Sets: Not Supported 00:21:09.430 Read Recovery Levels: Not Supported 00:21:09.430 Endurance Groups: Not Supported 00:21:09.430 Predictable Latency Mode: Not Supported 00:21:09.430 Traffic Based Keep ALive: Not Supported 00:21:09.430 Namespace Granularity: Not Supported 00:21:09.430 SQ Associations: Not Supported 00:21:09.430 UUID List: Not Supported 00:21:09.430 Multi-Domain Subsystem: Not Supported 00:21:09.430 Fixed Capacity Management: Not Supported 00:21:09.430 Variable Capacity Management: Not Supported 00:21:09.430 Delete Endurance Group: Not Supported 00:21:09.430 Delete NVM Set: Not Supported 00:21:09.430 Extended LBA Formats Supported: Not Supported 00:21:09.430 Flexible Data Placement Supported: Not Supported 00:21:09.430 00:21:09.430 Controller Memory Buffer Support 00:21:09.430 ================================ 00:21:09.430 Supported: No 00:21:09.430 00:21:09.430 Persistent Memory Region Support 00:21:09.430 ================================ 00:21:09.430 Supported: No 00:21:09.430 00:21:09.430 Admin Command Set Attributes 00:21:09.430 ============================ 00:21:09.430 Security Send/Receive: Not Supported 00:21:09.430 Format NVM: Not Supported 00:21:09.430 Firmware Activate/Download: Not Supported 00:21:09.430 Namespace Management: Not Supported 00:21:09.430 Device Self-Test: Not Supported 00:21:09.430 Directives: Not Supported 00:21:09.430 NVMe-MI: Not Supported 00:21:09.430 Virtualization Management: Not Supported 00:21:09.430 Doorbell Buffer Config: Not Supported 00:21:09.430 
Get LBA Status Capability: Not Supported 00:21:09.430 Command & Feature Lockdown Capability: Not Supported 00:21:09.430 Abort Command Limit: 4 00:21:09.430 Async Event Request Limit: 4 00:21:09.430 Number of Firmware Slots: N/A 00:21:09.430 Firmware Slot 1 Read-Only: N/A 00:21:09.430 Firmware Activation Without Reset: N/A 00:21:09.430 Multiple Update Detection Support: N/A 00:21:09.430 Firmware Update Granularity: No Information Provided 00:21:09.430 Per-Namespace SMART Log: No 00:21:09.430 Asymmetric Namespace Access Log Page: Not Supported 00:21:09.430 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:09.430 Command Effects Log Page: Supported 00:21:09.430 Get Log Page Extended Data: Supported 00:21:09.430 Telemetry Log Pages: Not Supported 00:21:09.430 Persistent Event Log Pages: Not Supported 00:21:09.430 Supported Log Pages Log Page: May Support 00:21:09.430 Commands Supported & Effects Log Page: Not Supported 00:21:09.430 Feature Identifiers & Effects Log Page:May Support 00:21:09.430 NVMe-MI Commands & Effects Log Page: May Support 00:21:09.430 Data Area 4 for Telemetry Log: Not Supported 00:21:09.430 Error Log Page Entries Supported: 128 00:21:09.430 Keep Alive: Supported 00:21:09.430 Keep Alive Granularity: 10000 ms 00:21:09.430 00:21:09.430 NVM Command Set Attributes 00:21:09.430 ========================== 00:21:09.430 Submission Queue Entry Size 00:21:09.430 Max: 64 00:21:09.430 Min: 64 00:21:09.430 Completion Queue Entry Size 00:21:09.430 Max: 16 00:21:09.430 Min: 16 00:21:09.430 Number of Namespaces: 32 00:21:09.430 Compare Command: Supported 00:21:09.430 Write Uncorrectable Command: Not Supported 00:21:09.430 Dataset Management Command: Supported 00:21:09.430 Write Zeroes Command: Supported 00:21:09.430 Set Features Save Field: Not Supported 00:21:09.430 Reservations: Supported 00:21:09.430 Timestamp: Not Supported 00:21:09.430 Copy: Supported 00:21:09.430 Volatile Write Cache: Present 00:21:09.430 Atomic Write Unit (Normal): 1 00:21:09.430 Atomic Write Unit (PFail): 1 00:21:09.430 Atomic Compare & Write Unit: 1 00:21:09.430 Fused Compare & Write: Supported 00:21:09.430 Scatter-Gather List 00:21:09.430 SGL Command Set: Supported 00:21:09.430 SGL Keyed: Supported 00:21:09.430 SGL Bit Bucket Descriptor: Not Supported 00:21:09.430 SGL Metadata Pointer: Not Supported 00:21:09.430 Oversized SGL: Not Supported 00:21:09.430 SGL Metadata Address: Not Supported 00:21:09.430 SGL Offset: Supported 00:21:09.430 Transport SGL Data Block: Not Supported 00:21:09.430 Replay Protected Memory Block: Not Supported 00:21:09.430 00:21:09.430 Firmware Slot Information 00:21:09.430 ========================= 00:21:09.430 Active slot: 1 00:21:09.431 Slot 1 Firmware Revision: 25.01 00:21:09.431 00:21:09.431 00:21:09.431 Commands Supported and Effects 00:21:09.431 ============================== 00:21:09.431 Admin Commands 00:21:09.431 -------------- 00:21:09.431 Get Log Page (02h): Supported 00:21:09.431 Identify (06h): Supported 00:21:09.431 Abort (08h): Supported 00:21:09.431 Set Features (09h): Supported 00:21:09.431 Get Features (0Ah): Supported 00:21:09.431 Asynchronous Event Request (0Ch): Supported 00:21:09.431 Keep Alive (18h): Supported 00:21:09.431 I/O Commands 00:21:09.431 ------------ 00:21:09.431 Flush (00h): Supported LBA-Change 00:21:09.431 Write (01h): Supported LBA-Change 00:21:09.431 Read (02h): Supported 00:21:09.431 Compare (05h): Supported 00:21:09.431 Write Zeroes (08h): Supported LBA-Change 00:21:09.431 Dataset Management (09h): Supported LBA-Change 00:21:09.431 Copy (19h): 
Supported LBA-Change 00:21:09.431 00:21:09.431 Error Log 00:21:09.431 ========= 00:21:09.431 00:21:09.431 Arbitration 00:21:09.431 =========== 00:21:09.431 Arbitration Burst: 1 00:21:09.431 00:21:09.431 Power Management 00:21:09.431 ================ 00:21:09.431 Number of Power States: 1 00:21:09.431 Current Power State: Power State #0 00:21:09.431 Power State #0: 00:21:09.431 Max Power: 0.00 W 00:21:09.431 Non-Operational State: Operational 00:21:09.431 Entry Latency: Not Reported 00:21:09.431 Exit Latency: Not Reported 00:21:09.431 Relative Read Throughput: 0 00:21:09.431 Relative Read Latency: 0 00:21:09.431 Relative Write Throughput: 0 00:21:09.431 Relative Write Latency: 0 00:21:09.431 Idle Power: Not Reported 00:21:09.431 Active Power: Not Reported 00:21:09.431 Non-Operational Permissive Mode: Not Supported 00:21:09.431 00:21:09.431 Health Information 00:21:09.431 ================== 00:21:09.431 Critical Warnings: 00:21:09.431 Available Spare Space: OK 00:21:09.431 Temperature: OK 00:21:09.431 Device Reliability: OK 00:21:09.431 Read Only: No 00:21:09.431 Volatile Memory Backup: OK 00:21:09.431 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:09.431 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:09.431 Available Spare: 0% 00:21:09.431 Available Spare Threshold: 0% 00:21:09.431 Life Percentage Used:[2024-12-06 13:14:57.866572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.866590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x702d90) 00:21:09.431 [2024-12-06 13:14:57.866603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.431 [2024-12-06 13:14:57.866640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x744080, cid 7, qid 0 00:21:09.431 [2024-12-06 13:14:57.867041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.431 [2024-12-06 13:14:57.867059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.431 [2024-12-06 13:14:57.867065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.867070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x744080) on tqpair=0x702d90 00:21:09.431 [2024-12-06 13:14:57.867118] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:09.431 [2024-12-06 13:14:57.867132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743600) on tqpair=0x702d90 00:21:09.431 [2024-12-06 13:14:57.867139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.431 [2024-12-06 13:14:57.867145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743780) on tqpair=0x702d90 00:21:09.431 [2024-12-06 13:14:57.867151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.431 [2024-12-06 13:14:57.867156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743900) on tqpair=0x702d90 00:21:09.431 [2024-12-06 13:14:57.867162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.431 [2024-12-06 13:14:57.867167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 
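The DEBUG trace interleaved with the identify dump above shows the tail of the admin flow: after the last Get Features (error recovery, cid 7) completes, the identify test prints the controller data and begins an orderly teardown ("Prepare to destruct SSD"); queued admin requests finish as ABORTED - SQ DELETION, and the host then polls CSTS over FABRIC PROPERTY GET (continuing below) until the shutdown handshake completes. The same target can be exercised with the kernel initiator instead; a minimal sketch, assuming nvme-cli and the nvme-tcp module are available, with the address, port and subsystem NQN copied from this log:

  # minimal sketch, not part of the test run; values taken from the log above
  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                        # mirrors the identify dump printed here
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # drives the same CC.SHN shutdown handshake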
00:21:09.431 [2024-12-06 13:14:57.867172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.431 [2024-12-06 13:14:57.867184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.867189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.867193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.431 [2024-12-06 13:14:57.867201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.431 [2024-12-06 13:14:57.867227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.431 [2024-12-06 13:14:57.867516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.431 [2024-12-06 13:14:57.867533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.431 [2024-12-06 13:14:57.867538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.867542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.431 [2024-12-06 13:14:57.867552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.867557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.867561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.431 [2024-12-06 13:14:57.867569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.431 [2024-12-06 13:14:57.867596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.431 [2024-12-06 13:14:57.867860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.431 [2024-12-06 13:14:57.867874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.431 [2024-12-06 13:14:57.867879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.867883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.431 [2024-12-06 13:14:57.867889] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:09.431 [2024-12-06 13:14:57.867895] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:09.431 [2024-12-06 13:14:57.867906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.867911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.867915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.431 [2024-12-06 13:14:57.867923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.431 [2024-12-06 13:14:57.867944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.431 [2024-12-06 13:14:57.868179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.431 [2024-12-06 13:14:57.868194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:21:09.431 [2024-12-06 13:14:57.868198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.868203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.431 [2024-12-06 13:14:57.868215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.868220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.868224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.431 [2024-12-06 13:14:57.868231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.431 [2024-12-06 13:14:57.868252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.431 [2024-12-06 13:14:57.868496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.431 [2024-12-06 13:14:57.868509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.431 [2024-12-06 13:14:57.868513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.868517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.431 [2024-12-06 13:14:57.868529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.868534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.431 [2024-12-06 13:14:57.868539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.431 [2024-12-06 13:14:57.868547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.432 [2024-12-06 13:14:57.868569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.432 [2024-12-06 13:14:57.868730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.432 [2024-12-06 13:14:57.868742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.432 [2024-12-06 13:14:57.868746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.868751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.432 [2024-12-06 13:14:57.868762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.868767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.868771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.432 [2024-12-06 13:14:57.868778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.432 [2024-12-06 13:14:57.868799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.432 [2024-12-06 13:14:57.868865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.432 [2024-12-06 13:14:57.868872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.432 [2024-12-06 13:14:57.868876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.868880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.432 [2024-12-06 13:14:57.868891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.868896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.868899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.432 [2024-12-06 13:14:57.868907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.432 [2024-12-06 13:14:57.868935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.432 [2024-12-06 13:14:57.869224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.432 [2024-12-06 13:14:57.869238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.432 [2024-12-06 13:14:57.869243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.869247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.432 [2024-12-06 13:14:57.869259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.869263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.869267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.432 [2024-12-06 13:14:57.869275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.432 [2024-12-06 13:14:57.869295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.432 [2024-12-06 13:14:57.869534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.432 [2024-12-06 13:14:57.869547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.432 [2024-12-06 13:14:57.869551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.869555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.432 [2024-12-06 13:14:57.869567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.869572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.869576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.432 [2024-12-06 13:14:57.869584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.432 [2024-12-06 13:14:57.869605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.432 [2024-12-06 13:14:57.869831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.432 [2024-12-06 13:14:57.869843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.432 [2024-12-06 13:14:57.869847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.869851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.432 [2024-12-06 13:14:57.869863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.869868] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.869872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.432 [2024-12-06 13:14:57.869879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.432 [2024-12-06 13:14:57.869900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.432 [2024-12-06 13:14:57.870143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.432 [2024-12-06 13:14:57.870154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.432 [2024-12-06 13:14:57.870159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.870163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.432 [2024-12-06 13:14:57.870174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.870179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.870183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.432 [2024-12-06 13:14:57.870191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.432 [2024-12-06 13:14:57.870210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.432 [2024-12-06 13:14:57.870444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.432 [2024-12-06 13:14:57.874479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.432 [2024-12-06 13:14:57.874493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.874498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.432 [2024-12-06 13:14:57.874520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.874527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.874531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x702d90) 00:21:09.432 [2024-12-06 13:14:57.874541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.432 [2024-12-06 13:14:57.874577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x743a80, cid 3, qid 0 00:21:09.432 [2024-12-06 13:14:57.874854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.432 [2024-12-06 13:14:57.874869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.432 [2024-12-06 13:14:57.874874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.432 [2024-12-06 13:14:57.874878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x743a80) on tqpair=0x702d90 00:21:09.432 [2024-12-06 13:14:57.874887] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:21:09.432 0% 00:21:09.432 Data Units Read: 0 00:21:09.432 Data Units Written: 0 00:21:09.432 Host Read Commands: 0 00:21:09.432 Host Write Commands: 0 00:21:09.432 Controller Busy Time: 0 minutes 
00:21:09.432 Power Cycles: 0 00:21:09.432 Power On Hours: 0 hours 00:21:09.432 Unsafe Shutdowns: 0 00:21:09.432 Unrecoverable Media Errors: 0 00:21:09.432 Lifetime Error Log Entries: 0 00:21:09.432 Warning Temperature Time: 0 minutes 00:21:09.432 Critical Temperature Time: 0 minutes 00:21:09.432 00:21:09.432 Number of Queues 00:21:09.432 ================ 00:21:09.432 Number of I/O Submission Queues: 127 00:21:09.432 Number of I/O Completion Queues: 127 00:21:09.432 00:21:09.432 Active Namespaces 00:21:09.432 ================= 00:21:09.432 Namespace ID:1 00:21:09.432 Error Recovery Timeout: Unlimited 00:21:09.432 Command Set Identifier: NVM (00h) 00:21:09.432 Deallocate: Supported 00:21:09.432 Deallocated/Unwritten Error: Not Supported 00:21:09.432 Deallocated Read Value: Unknown 00:21:09.432 Deallocate in Write Zeroes: Not Supported 00:21:09.432 Deallocated Guard Field: 0xFFFF 00:21:09.432 Flush: Supported 00:21:09.432 Reservation: Supported 00:21:09.432 Namespace Sharing Capabilities: Multiple Controllers 00:21:09.432 Size (in LBAs): 131072 (0GiB) 00:21:09.432 Capacity (in LBAs): 131072 (0GiB) 00:21:09.432 Utilization (in LBAs): 131072 (0GiB) 00:21:09.432 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:09.432 EUI64: ABCDEF0123456789 00:21:09.432 UUID: 1d91d439-dd49-4a18-8bdb-326d8a1929a7 00:21:09.432 Thin Provisioning: Not Supported 00:21:09.432 Per-NS Atomic Units: Yes 00:21:09.432 Atomic Boundary Size (Normal): 0 00:21:09.432 Atomic Boundary Size (PFail): 0 00:21:09.432 Atomic Boundary Offset: 0 00:21:09.432 Maximum Single Source Range Length: 65535 00:21:09.432 Maximum Copy Length: 65535 00:21:09.432 Maximum Source Range Count: 1 00:21:09.432 NGUID/EUI64 Never Reused: No 00:21:09.432 Namespace Write Protected: No 00:21:09.432 Number of LBA Formats: 1 00:21:09.432 Current LBA Format: LBA Format #00 00:21:09.432 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:09.432 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.432 13:14:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.432 rmmod nvme_tcp 00:21:09.433 rmmod nvme_fabrics 00:21:09.433 rmmod nvme_keyring 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:09.433 13:14:58 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 87612 ']' 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 87612 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 87612 ']' 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 87612 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87612 00:21:09.433 killing process with pid 87612 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87612' 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 87612 00:21:09.433 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 87612 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:09.691 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:21:09.951 00:21:09.951 real 0m3.188s 00:21:09.951 user 0m8.238s 00:21:09.951 sys 0m0.805s 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.951 13:14:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:09.951 ************************************ 00:21:09.951 END TEST nvmf_identify 00:21:09.951 ************************************ 00:21:10.210 13:14:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:10.210 13:14:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.210 13:14:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.210 13:14:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.210 ************************************ 00:21:10.210 START TEST nvmf_perf 00:21:10.210 ************************************ 00:21:10.210 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:10.210 * Looking for test storage... 
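With nvmf_identify finished (about 3.2 s wall time per the timings above), the harness launches the next suite immediately: run_test wraps test/nvmf/host/perf.sh with the same --transport=tcp argument. The teardown just traced — nvmf_delete_subsystem over RPC, killing the target (pid 87612 in this run), unloading nvme-tcp/nvme-fabrics/nvme-keyring, and deleting the veth links and bridge — can be approximated by hand; a rough sketch, assuming an SPDK checkout at the path used in this log:

  # rough sketch of the teardown/next-suite sequence; pid and paths are
  # specific to this particular run
  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 87612                              # the nvmf target process for this run
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring
  test/nvmf/host/perf.sh --transport=tcp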
00:21:10.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:10.210 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:10.210 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:10.210 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.211 --rc genhtml_branch_coverage=1 00:21:10.211 --rc genhtml_function_coverage=1 00:21:10.211 --rc genhtml_legend=1 00:21:10.211 --rc geninfo_all_blocks=1 00:21:10.211 --rc geninfo_unexecuted_blocks=1 00:21:10.211 00:21:10.211 ' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.211 --rc genhtml_branch_coverage=1 00:21:10.211 --rc genhtml_function_coverage=1 00:21:10.211 --rc genhtml_legend=1 00:21:10.211 --rc geninfo_all_blocks=1 00:21:10.211 --rc geninfo_unexecuted_blocks=1 00:21:10.211 00:21:10.211 ' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.211 --rc genhtml_branch_coverage=1 00:21:10.211 --rc genhtml_function_coverage=1 00:21:10.211 --rc genhtml_legend=1 00:21:10.211 --rc geninfo_all_blocks=1 00:21:10.211 --rc geninfo_unexecuted_blocks=1 00:21:10.211 00:21:10.211 ' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.211 --rc genhtml_branch_coverage=1 00:21:10.211 --rc genhtml_function_coverage=1 00:21:10.211 --rc genhtml_legend=1 00:21:10.211 --rc geninfo_all_blocks=1 00:21:10.211 --rc geninfo_unexecuted_blocks=1 00:21:10.211 00:21:10.211 ' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.211 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:10.211 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:10.212 Cannot find device "nvmf_init_br" 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:21:10.212 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:10.470 Cannot find device "nvmf_init_br2" 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:10.470 Cannot find device "nvmf_tgt_br" 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.470 Cannot find device "nvmf_tgt_br2" 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:10.470 Cannot find device "nvmf_init_br" 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:10.470 Cannot find device "nvmf_init_br2" 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:10.470 Cannot find device "nvmf_tgt_br" 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:10.470 Cannot find device "nvmf_tgt_br2" 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:10.470 Cannot find device "nvmf_br" 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:10.470 Cannot find device "nvmf_init_if" 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:10.470 Cannot find device "nvmf_init_if2" 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:10.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:10.470 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:10.470 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:10.470 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:10.470 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:10.470 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:10.470 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:10.470 13:14:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:10.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:10.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:21:10.729 00:21:10.729 --- 10.0.0.3 ping statistics --- 00:21:10.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.729 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:10.729 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:10.729 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:21:10.729 00:21:10.729 --- 10.0.0.4 ping statistics --- 00:21:10.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.729 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:10.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:21:10.729 00:21:10.729 --- 10.0.0.1 ping statistics --- 00:21:10.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.729 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:10.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:21:10.729 00:21:10.729 --- 10.0.0.2 ping statistics --- 00:21:10.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.729 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=87890 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 87890 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 87890 ']' 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
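The nvmfappstart/waitforlisten step above reduces to a background launch of nvmf_tgt inside the test namespace followed by a poll loop against its UNIX-domain RPC socket. A minimal sketch of that pattern, assuming the stock rpc.py client and the default /var/tmp/spdk.sock path (the retry count and use of rpc_get_methods as the readiness probe are illustrative choices, not taken from the harness):

    # Launch the target in the dedicated netns, matching the command in the log.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the target answers on its RPC socket (any cheap RPC will do).
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
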
00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.729 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:10.988 [2024-12-06 13:14:59.387714] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:21:10.988 [2024-12-06 13:14:59.387873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.988 [2024-12-06 13:14:59.551413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.988 [2024-12-06 13:14:59.624821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.988 [2024-12-06 13:14:59.624899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.988 [2024-12-06 13:14:59.624923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.988 [2024-12-06 13:14:59.624941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.988 [2024-12-06 13:14:59.624956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.988 [2024-12-06 13:14:59.626355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.248 [2024-12-06 13:14:59.626417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.248 [2024-12-06 13:14:59.626498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.248 [2024-12-06 13:14:59.626503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.814 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.814 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:11.814 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.814 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.814 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:11.815 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.815 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:11.815 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:21:12.381 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:21:12.382 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:12.640 13:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:21:12.640 13:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:12.898 13:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:12.898 13:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:21:12.898 13:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
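The perf.sh@30-34 calls above assemble the bdev list exported to the subsystem: a 64 MiB RAM-backed malloc bdev plus the local NVMe drive recovered from the generated bdev config. A condensed sketch of the same sequence, assuming the stock rpc.py and jq on PATH (variable names mirror the script's):

    # 64 MiB malloc bdev with 512-byte blocks; the RPC prints the new name ("Malloc0").
    bdevs=$(./scripts/rpc.py bdev_malloc_create 64 512)
    # Pull the PCI address of the Nvme0 controller out of the loaded config.
    local_nvme_trid=$(./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')
    # Only add the NVMe namespace to the test set if a local controller was found.
    [ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"
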
00:21:12.898 13:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:12.898 13:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:13.157 [2024-12-06 13:15:01.758289] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.157 13:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.415 13:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:13.415 13:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:13.982 13:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:13.982 13:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:14.240 13:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:14.498 [2024-12-06 13:15:02.936013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:14.498 13:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:14.756 13:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:14.756 13:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:14.756 13:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:14.756 13:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:16.130 Initializing NVMe Controllers 00:21:16.130 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:16.130 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:16.130 Initialization complete. Launching workers. 00:21:16.130 ======================================================== 00:21:16.130 Latency(us) 00:21:16.130 Device Information : IOPS MiB/s Average min max 00:21:16.130 PCIE (0000:00:10.0) NSID 1 from core 0: 24192.00 94.50 1322.66 287.68 5210.04 00:21:16.130 ======================================================== 00:21:16.130 Total : 24192.00 94.50 1322.66 287.68 5210.04 00:21:16.130 00:21:16.130 13:15:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:17.071 Initializing NVMe Controllers 00:21:17.071 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:17.071 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:17.071 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:17.071 Initialization complete. Launching workers. 
00:21:17.071 ======================================================== 00:21:17.071 Latency(us) 00:21:17.071 Device Information : IOPS MiB/s Average min max 00:21:17.071 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3367.00 13.15 296.63 118.05 7094.99 00:21:17.071 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8186.35 7045.55 12033.74 00:21:17.071 ======================================================== 00:21:17.071 Total : 3490.00 13.63 574.69 118.05 12033.74 00:21:17.071 00:21:17.340 13:15:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:18.716 Initializing NVMe Controllers 00:21:18.717 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.717 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:18.717 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:18.717 Initialization complete. Launching workers. 00:21:18.717 ======================================================== 00:21:18.717 Latency(us) 00:21:18.717 Device Information : IOPS MiB/s Average min max 00:21:18.717 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7877.70 30.77 4062.21 815.06 20818.02 00:21:18.717 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2676.54 10.46 12070.65 5624.41 28249.26 00:21:18.717 ======================================================== 00:21:18.717 Total : 10554.24 41.23 6093.14 815.06 28249.26 00:21:18.717 00:21:18.717 13:15:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:21:18.717 13:15:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:21.249 Initializing NVMe Controllers 00:21:21.249 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:21.249 Controller IO queue size 128, less than required. 00:21:21.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:21.249 Controller IO queue size 128, less than required. 00:21:21.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:21.249 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:21.249 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:21.249 Initialization complete. Launching workers. 
00:21:21.249 ======================================================== 00:21:21.249 Latency(us) 00:21:21.249 Device Information : IOPS MiB/s Average min max 00:21:21.249 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1291.48 322.87 100921.96 58009.66 209955.98 00:21:21.249 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 554.49 138.62 244300.19 105867.89 346730.43 00:21:21.249 ======================================================== 00:21:21.249 Total : 1845.97 461.49 143989.80 58009.66 346730.43 00:21:21.249 00:21:21.249 13:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:21:21.507 Initializing NVMe Controllers 00:21:21.507 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:21.507 Controller IO queue size 128, less than required. 00:21:21.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:21.507 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:21.507 Controller IO queue size 128, less than required. 00:21:21.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:21.507 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:21:21.507 WARNING: Some requested NVMe devices were skipped 00:21:21.507 No valid NVMe controllers or AIO or URING devices found 00:21:21.507 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:21:24.032 Initializing NVMe Controllers 00:21:24.032 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.032 Controller IO queue size 128, less than required. 00:21:24.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.032 Controller IO queue size 128, less than required. 00:21:24.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.032 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:24.032 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:24.032 Initialization complete. Launching workers. 
00:21:24.032 00:21:24.032 ==================== 00:21:24.032 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:24.032 TCP transport: 00:21:24.032 polls: 8115 00:21:24.032 idle_polls: 4385 00:21:24.032 sock_completions: 3730 00:21:24.032 nvme_completions: 4697 00:21:24.032 submitted_requests: 7060 00:21:24.032 queued_requests: 1 00:21:24.032 00:21:24.032 ==================== 00:21:24.032 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:24.032 TCP transport: 00:21:24.032 polls: 8282 00:21:24.032 idle_polls: 5514 00:21:24.032 sock_completions: 2768 00:21:24.032 nvme_completions: 5271 00:21:24.032 submitted_requests: 7848 00:21:24.032 queued_requests: 1 00:21:24.032 ======================================================== 00:21:24.032 Latency(us) 00:21:24.032 Device Information : IOPS MiB/s Average min max 00:21:24.032 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1173.12 293.28 114792.35 45959.63 249813.30 00:21:24.032 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1316.51 329.13 99302.63 41886.85 291593.41 00:21:24.032 ======================================================== 00:21:24.032 Total : 2489.63 622.41 106601.41 41886.85 291593.41 00:21:24.032 00:21:24.032 13:15:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:24.288 13:15:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.547 rmmod nvme_tcp 00:21:24.547 rmmod nvme_fabrics 00:21:24.547 rmmod nvme_keyring 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 87890 ']' 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 87890 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 87890 ']' 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 87890 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.547 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87890 00:21:24.805 killing process with pid 87890 00:21:24.805 13:15:13 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.805 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.805 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87890' 00:21:24.805 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 87890 00:21:24.805 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 87890 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:25.371 13:15:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:25.371 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:21:25.628 ************************************ 00:21:25.628 END TEST nvmf_perf 00:21:25.628 ************************************ 
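The nvmf_tcp_fini/nvmf_veth_fini sequence above is the mirror image of nvmf_veth_init: drop only the firewall rules this run tagged, tear the interfaces out of the bridge, then remove the namespace. A sketch of that teardown, using the SPDK_NVMF comment tag visible in the iptables-save pipeline above (the final netns delete is an assumption about what _remove_spdk_ns ends in, since the log elides its body):

    # Strip only the rules the test inserted, identified by their comment tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip link delete nvmf_br type bridge
    # Deleting one end of a veth pair removes its peer as well.
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    # Removing the namespace reaps the tgt-side interfaces moved into it.
    ip netns delete nvmf_tgt_ns_spdk
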
00:21:25.628 00:21:25.628 real 0m15.497s 00:21:25.628 user 0m56.275s 00:21:25.628 sys 0m3.727s 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.628 ************************************ 00:21:25.628 START TEST nvmf_fio_host 00:21:25.628 ************************************ 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:25.628 * Looking for test storage... 00:21:25.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:25.628 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:25.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.887 --rc genhtml_branch_coverage=1 00:21:25.887 --rc genhtml_function_coverage=1 00:21:25.887 --rc genhtml_legend=1 00:21:25.887 --rc geninfo_all_blocks=1 00:21:25.887 --rc geninfo_unexecuted_blocks=1 00:21:25.887 00:21:25.887 ' 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:25.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.887 --rc genhtml_branch_coverage=1 00:21:25.887 --rc genhtml_function_coverage=1 00:21:25.887 --rc genhtml_legend=1 00:21:25.887 --rc geninfo_all_blocks=1 00:21:25.887 --rc geninfo_unexecuted_blocks=1 00:21:25.887 00:21:25.887 ' 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:25.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.887 --rc genhtml_branch_coverage=1 00:21:25.887 --rc genhtml_function_coverage=1 00:21:25.887 --rc genhtml_legend=1 00:21:25.887 --rc geninfo_all_blocks=1 00:21:25.887 --rc geninfo_unexecuted_blocks=1 00:21:25.887 00:21:25.887 ' 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:25.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.887 --rc genhtml_branch_coverage=1 00:21:25.887 --rc genhtml_function_coverage=1 00:21:25.887 --rc genhtml_legend=1 00:21:25.887 --rc geninfo_all_blocks=1 00:21:25.887 --rc geninfo_unexecuted_blocks=1 00:21:25.887 00:21:25.887 ' 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.887 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.888 13:15:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.888 13:15:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.888 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
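Each test run mints a fresh host identity, as seen where common.sh@17-19 captures nvme gen-hostnqn above. A sketch of that derivation, assuming the UUID suffix is stripped with a plain parameter expansion (the exact mechanism inside common.sh may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:02865f8e-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing UUID as the host ID
    # Reusable connect arguments, matching the NVME_HOST array in the log.
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
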
00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:25.888 Cannot find device "nvmf_init_br" 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:25.888 Cannot find device "nvmf_init_br2" 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:25.888 Cannot find device "nvmf_tgt_br" 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:21:25.888 Cannot find device "nvmf_tgt_br2" 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:25.888 Cannot find device "nvmf_init_br" 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:25.888 Cannot find device "nvmf_init_br2" 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:25.888 Cannot find device "nvmf_tgt_br" 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:21:25.888 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:26.146 Cannot find device "nvmf_tgt_br2" 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:26.146 Cannot find device "nvmf_br" 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:26.146 Cannot find device "nvmf_init_if" 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:26.146 Cannot find device "nvmf_init_if2" 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:26.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:26.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:26.146 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:26.147 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:26.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:26.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:21:26.405 00:21:26.405 --- 10.0.0.3 ping statistics --- 00:21:26.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.405 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:26.405 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:26.405 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:21:26.405 00:21:26.405 --- 10.0.0.4 ping statistics --- 00:21:26.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.405 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:26.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:26.405 00:21:26.405 --- 10.0.0.1 ping statistics --- 00:21:26.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.405 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:26.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:21:26.405 00:21:26.405 --- 10.0.0.2 ping statistics --- 00:21:26.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.405 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88425 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88425 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 88425 ']' 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.405 13:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.406 [2024-12-06 13:15:14.891844] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:21:26.406 [2024-12-06 13:15:14.892155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.663 [2024-12-06 13:15:15.048577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.663 [2024-12-06 13:15:15.123220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.663 [2024-12-06 13:15:15.123280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.663 [2024-12-06 13:15:15.123305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.663 [2024-12-06 13:15:15.123316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.663 [2024-12-06 13:15:15.123325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
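For readers following the trace, the nvmf_veth_init sequence above (nvmf/common.sh@145 through @219) builds the test topology below. This is a condensed sketch, not the script itself: the namespace, interface names, addresses and commands are taken verbatim from the trace, but the up/master steps are collapsed into a loop and the pre-cleanup pass (the "Cannot find device" probes) is omitted.

  # Target interfaces live in a private namespace; initiators stay in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Initiators get 10.0.0.1/.2 in the root namespace; targets get 10.0.0.3/.4 inside it.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # One bridge joins the four peer ends, giving initiator and target a common L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" up
      ip link set "$br" master nvmf_br
  done

The four pings earlier in the trace (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside the namespace) confirmed the bridge forwards in both directions before the target was started.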
00:21:26.663 [2024-12-06 13:15:15.124614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.663 [2024-12-06 13:15:15.125180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.663 [2024-12-06 13:15:15.125324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.663 [2024-12-06 13:15:15.125332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.663 13:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.663 13:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:26.663 13:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:27.227 [2024-12-06 13:15:15.559366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.227 13:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:27.227 13:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.227 13:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.227 13:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:27.485 Malloc1 00:21:27.485 13:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.742 13:15:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:28.000 13:15:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:28.258 [2024-12-06 13:15:16.835072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:28.258 13:15:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:28.826 13:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:28.826 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:28.826 fio-3.35 00:21:28.826 Starting 1 thread 00:21:31.366 00:21:31.366 test: (groupid=0, jobs=1): err= 0: pid=88547: Fri Dec 6 13:15:19 2024 00:21:31.366 read: IOPS=8898, BW=34.8MiB/s (36.4MB/s)(69.8MiB/2007msec) 00:21:31.366 slat (nsec): min=1984, max=317616, avg=2377.20, stdev=3122.04 00:21:31.366 clat (usec): min=3134, max=13455, avg=7517.27, stdev=586.19 00:21:31.366 lat (usec): min=3177, max=13457, avg=7519.65, stdev=585.95 00:21:31.366 clat percentiles (usec): 00:21:31.366 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6915], 20.00th=[ 7111], 00:21:31.366 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7570], 00:21:31.366 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8094], 95.00th=[ 8291], 00:21:31.366 | 99.00th=[ 8717], 99.50th=[10814], 99.90th=[12780], 99.95th=[12911], 00:21:31.366 | 99.99th=[13435] 00:21:31.366 bw ( KiB/s): min=34552, max=36368, per=99.98%, avg=35590.00, stdev=758.23, samples=4 00:21:31.366 iops : min= 8638, max= 9092, avg=8898.00, stdev=189.73, samples=4 00:21:31.366 write: IOPS=8913, BW=34.8MiB/s (36.5MB/s)(69.9MiB/2007msec); 0 zone resets 00:21:31.366 slat (usec): min=2, max=247, avg= 2.44, stdev= 2.07 00:21:31.366 clat (usec): min=2299, max=13322, avg=6806.89, stdev=532.72 00:21:31.366 lat (usec): min=2312, max=13324, avg=6809.33, stdev=532.55 00:21:31.366 clat percentiles (usec): 00:21:31.366 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6456], 00:21:31.366 | 
30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6915], 00:21:31.366 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7439], 00:21:31.366 | 99.00th=[ 7767], 99.50th=[ 9634], 99.90th=[11863], 99.95th=[12387], 00:21:31.366 | 99.99th=[13173] 00:21:31.366 bw ( KiB/s): min=35352, max=35840, per=100.00%, avg=35654.00, stdev=210.18, samples=4 00:21:31.366 iops : min= 8838, max= 8960, avg=8913.50, stdev=52.55, samples=4 00:21:31.366 lat (msec) : 4=0.11%, 10=99.41%, 20=0.49% 00:21:31.366 cpu : usr=70.09%, sys=23.13%, ctx=8, majf=0, minf=7 00:21:31.366 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:31.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:31.366 issued rwts: total=17860,17890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:31.366 00:21:31.366 Run status group 0 (all jobs): 00:21:31.366 READ: bw=34.8MiB/s (36.4MB/s), 34.8MiB/s-34.8MiB/s (36.4MB/s-36.4MB/s), io=69.8MiB (73.2MB), run=2007-2007msec 00:21:31.366 WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2007-2007msec 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:31.366 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:31.366 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:31.366 fio-3.35 00:21:31.366 Starting 1 thread 00:21:33.908 00:21:33.908 test: (groupid=0, jobs=1): err= 0: pid=88596: Fri Dec 6 13:15:22 2024 00:21:33.908 read: IOPS=7792, BW=122MiB/s (128MB/s)(244MiB/2007msec) 00:21:33.908 slat (usec): min=3, max=160, avg= 4.02, stdev= 2.20 00:21:33.908 clat (usec): min=2539, max=19092, avg=9659.23, stdev=2648.35 00:21:33.908 lat (usec): min=2542, max=19095, avg=9663.25, stdev=2648.66 00:21:33.908 clat percentiles (usec): 00:21:33.908 | 1.00th=[ 5014], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7308], 00:21:33.908 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10028], 00:21:33.908 | 70.00th=[10945], 80.00th=[11863], 90.00th=[13304], 95.00th=[14615], 00:21:33.908 | 99.00th=[16581], 99.50th=[16909], 99.90th=[18220], 99.95th=[18744], 00:21:33.908 | 99.99th=[19006] 00:21:33.908 bw ( KiB/s): min=56032, max=76000, per=52.33%, avg=65248.00, stdev=8475.27, samples=4 00:21:33.908 iops : min= 3502, max= 4750, avg=4078.00, stdev=529.70, samples=4 00:21:33.908 write: IOPS=4602, BW=71.9MiB/s (75.4MB/s)(133MiB/1854msec); 0 zone resets 00:21:33.908 slat (usec): min=34, max=576, avg=39.24, stdev= 8.70 00:21:33.908 clat (usec): min=6402, max=19600, avg=11618.81, stdev=2081.85 00:21:33.908 lat (usec): min=6439, max=19648, avg=11658.06, stdev=2083.10 00:21:33.908 clat percentiles (usec): 00:21:33.908 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9896], 00:21:33.908 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:21:33.908 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14746], 95.00th=[15795], 00:21:33.908 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:21:33.908 | 99.99th=[19530] 00:21:33.908 bw ( KiB/s): min=58112, max=79520, per=92.35%, avg=68008.00, stdev=8988.50, samples=4 00:21:33.908 iops : min= 3632, max= 4970, avg=4250.50, stdev=561.78, samples=4 00:21:33.908 lat (msec) : 4=0.14%, 10=46.53%, 20=53.33% 00:21:33.908 cpu : usr=72.78%, sys=17.80%, ctx=19, majf=0, minf=18 00:21:33.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:33.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.908 issued rwts: total=15640,8533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.908 00:21:33.908 Run status group 0 (all jobs): 00:21:33.908 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=244MiB (256MB), run=2007-2007msec 00:21:33.908 WRITE: bw=71.9MiB/s (75.4MB/s), 
71.9MiB/s-71.9MiB/s (75.4MB/s-75.4MB/s), io=133MiB (140MB), run=1854-1854msec 00:21:33.908 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:33.908 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:33.908 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:33.908 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:33.908 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:33.908 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:33.908 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:34.166 rmmod nvme_tcp 00:21:34.166 rmmod nvme_fabrics 00:21:34.166 rmmod nvme_keyring 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 88425 ']' 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 88425 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 88425 ']' 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 88425 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88425 00:21:34.166 killing process with pid 88425 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88425' 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 88425 00:21:34.166 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 88425 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # 
iptables-save 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:34.423 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:34.423 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:21:34.682 00:21:34.682 real 0m8.973s 00:21:34.682 user 0m35.718s 00:21:34.682 sys 0m2.408s 00:21:34.682 ************************************ 00:21:34.682 END TEST nvmf_fio_host 00:21:34.682 ************************************ 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.682 ************************************ 00:21:34.682 START TEST nvmf_failover 00:21:34.682 ************************************ 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:34.682 * Looking for test storage... 00:21:34.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:21:34.682 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:34.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.941 --rc genhtml_branch_coverage=1 00:21:34.941 --rc genhtml_function_coverage=1 00:21:34.941 --rc genhtml_legend=1 00:21:34.941 --rc geninfo_all_blocks=1 00:21:34.941 --rc geninfo_unexecuted_blocks=1 00:21:34.941 00:21:34.941 ' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:34.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.941 --rc genhtml_branch_coverage=1 00:21:34.941 --rc genhtml_function_coverage=1 00:21:34.941 --rc genhtml_legend=1 00:21:34.941 --rc geninfo_all_blocks=1 00:21:34.941 --rc geninfo_unexecuted_blocks=1 00:21:34.941 00:21:34.941 ' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:34.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.941 --rc genhtml_branch_coverage=1 00:21:34.941 --rc genhtml_function_coverage=1 00:21:34.941 --rc genhtml_legend=1 00:21:34.941 --rc geninfo_all_blocks=1 00:21:34.941 --rc geninfo_unexecuted_blocks=1 00:21:34.941 00:21:34.941 ' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:34.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.941 --rc genhtml_branch_coverage=1 00:21:34.941 --rc genhtml_function_coverage=1 00:21:34.941 --rc genhtml_legend=1 00:21:34.941 --rc geninfo_all_blocks=1 00:21:34.941 --rc geninfo_unexecuted_blocks=1 00:21:34.941 00:21:34.941 ' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.941 
13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.941 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:34.941 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
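The ipts/iptr helpers traced around here (nvmf/common.sh@790 and @791) implement a simple self-cleaning firewall pattern: every rule a test adds carries an SPDK_NVMF comment, and teardown filters the whole ruleset through grep once instead of deleting rules one by one. A minimal sketch of the pattern, assuming the three @791 commands above form a single pipeline:

  # Add: tag each rule with a comment recording exactly what was inserted.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # Remove: rewrite the ruleset without any SPDK_NVMF-tagged rules.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

This is why each test can open its ports freely during nvmftestinit: whatever it adds is swept out by the same one-liner at nvmftestfini, leaving unrelated host rules untouched.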
00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:34.942 Cannot find device "nvmf_init_br" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:34.942 Cannot find device "nvmf_init_br2" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:21:34.942 Cannot find device "nvmf_tgt_br" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:34.942 Cannot find device "nvmf_tgt_br2" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:34.942 Cannot find device "nvmf_init_br" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:34.942 Cannot find device "nvmf_init_br2" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:34.942 Cannot find device "nvmf_tgt_br" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:34.942 Cannot find device "nvmf_tgt_br2" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:34.942 Cannot find device "nvmf_br" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:34.942 Cannot find device "nvmf_init_if" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:34.942 Cannot find device "nvmf_init_if2" 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:34.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:34.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:34.942 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:35.200 
13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:35.200 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:35.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:35.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:21:35.201 00:21:35.201 --- 10.0.0.3 ping statistics --- 00:21:35.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.201 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:35.201 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:35.201 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:21:35.201 00:21:35.201 --- 10.0.0.4 ping statistics --- 00:21:35.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.201 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:35.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:35.201 00:21:35.201 --- 10.0.0.1 ping statistics --- 00:21:35.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.201 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:35.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:21:35.201 00:21:35.201 --- 10.0.0.2 ping statistics --- 00:21:35.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.201 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=88870 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 88870 00:21:35.201 13:15:23 
00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88870 ']'
00:21:35.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:35.201 13:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:35.459 [2024-12-06 13:15:23.900046] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization...
00:21:35.459 [2024-12-06 13:15:23.900370] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:35.459 [2024-12-06 13:15:24.052684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:35.733 [2024-12-06 13:15:24.122167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:35.733 [2024-12-06 13:15:24.122412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:35.733 [2024-12-06 13:15:24.122587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:35.733 [2024-12-06 13:15:24.122732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:35.733 [2024-12-06 13:15:24.122787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
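
waitforlisten above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock before any provisioning RPC is issued. A rough stand-in for what it does; the retry budget mirrors the max_retries=100 seen in the trace, while the 0.1 s polling interval is a guess rather than the autotest value:

  # Poll the RPC socket until the target responds; rpc_get_methods is a
  # cheap RPC that succeeds as soon as the app is listening.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do                  # max_retries=100, as in the log
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break                              # target is up and serving RPCs
      fi
      sleep 0.1                              # interval is an assumption
  done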
00:21:35.733 [2024-12-06 13:15:24.124212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:35.733 [2024-12-06 13:15:24.124363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:35.733 [2024-12-06 13:15:24.124356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:35.733 13:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:35.733 13:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:21:35.733 13:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:35.733 13:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:35.733 13:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:35.733 13:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:35.733 13:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:35.991 [2024-12-06 13:15:24.561195] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:35.991 13:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:21:36.251 Malloc0
00:21:36.510 13:15:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:36.768 13:15:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:37.027 13:15:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:21:37.591 [2024-12-06 13:15:25.925699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:21:37.591 13:15:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:21:37.591 [2024-12-06 13:15:26.225941] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:21:37.850 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:21:38.109 [2024-12-06 13:15:26.538245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:21:38.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
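
Condensed, the RPC sequence above provisions the whole target: one TCP transport, a 64 MiB malloc bdev as the namespace, and one subsystem listening on three ports so the host has three paths to fail between. The same steps as a plain script, with flags copied from the log (-a allows any host NQN, -s sets the serial number; the transport options are taken verbatim, not interpreted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_create_transport -t tcp -o -u 8192       # transport options as used above
  "$rpc" bdev_malloc_create 64 512 -b Malloc0          # 64 MiB ramdisk, 512 B blocks
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0          # expose Malloc0 as namespace 1
  for port in 4420 4421 4422; do                       # three listeners = three failover paths
      "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s "$port"
  done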
00:21:38.109 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88974
00:21:38.109 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:21:38.109 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:38.109 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88974 /var/tmp/bdevperf.sock
00:21:38.109 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88974 ']'
00:21:38.109 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:38.109 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:38.109 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:38.109 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:38.109 13:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:39.041 13:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:39.041 13:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:21:39.041 13:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:39.606 NVMe0n1
00:21:39.607 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:39.880
00:21:39.880 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:39.880 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89026
00:21:39.880 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:21:40.855 13:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:21:41.162 [2024-12-06 13:15:29.690447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f34930 is same with the state(6) to be set
00:21:41.162 [... the same *ERROR* line repeats verbatim through 13:15:29.691607 while the 4420 qpair is torn down ...]
00:21:41.162 13:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:21:44.447 13:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:44.705
00:21:44.705 13:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:21:44.964 [2024-12-06 13:15:33.497738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f353e0 is same with the state(6) to be set
00:21:44.964 [... the same *ERROR* line repeats verbatim through 13:15:33.498530 while the 4421 qpair is torn down ...]
00:21:44.965 13:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:21:48.311 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:21:48.311 [2024-12-06 13:15:36.815750] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:21:48.311 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:21:49.246 13:15:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:21:49.505 [2024-12-06 13:15:38.100989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbde0 is same with the state(6) to be set
00:21:49.505 [... the same *ERROR* line repeats verbatim through 13:15:38.102103 while the 4422 qpair is torn down ...]
00:21:49.507 13:15:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89026
00:21:56.069 {
00:21:56.069 "results": [
00:21:56.069 {
00:21:56.069 "job": "NVMe0n1",
00:21:56.069 "core_mask": "0x1",
00:21:56.069 "workload": "verify",
00:21:56.069 "status": "finished",
00:21:56.069 "verify_range": {
00:21:56.069 "start": 0,
00:21:56.069 "length": 16384
00:21:56.069 },
00:21:56.069 "queue_depth": 128,
00:21:56.069 "io_size": 4096,
00:21:56.069 "runtime": 15.009634,
00:21:56.069 "iops": 8908.744876790466,
00:21:56.069 "mibps": 34.79978467496276,
00:21:56.069 "io_failed": 3573,
00:21:56.069 "io_timeout": 0,
00:21:56.069 "avg_latency_us": 13959.98876496335,
00:21:56.069 "min_latency_us": 640.4654545454546,
00:21:56.069 "max_latency_us": 20375.738181818182
00:21:56.069 }
00:21:56.069 ],
00:21:56.069 "core_count": 1
00:21:56.069 }
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 88974
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88974 ']'
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88974
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88974
00:21:56.069 killing process with pid 88974
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88974'
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88974
00:21:56.069 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88974
00:21:56.070 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:56.070 [2024-12-06 13:15:26.621935] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization...
00:21:56.070 [2024-12-06 13:15:26.622069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88974 ]
00:21:56.070 [2024-12-06 13:15:26.778583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:56.070 [2024-12-06 13:15:26.848410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:56.070 Running I/O for 15 seconds...
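
The killprocess steps above are autotest_common.sh's guarded kill: confirm the pid is still alive, check the process name so a recycled pid (or anything owned by sudo) is never signalled, then kill and reap. A minimal sketch of that pattern, modelled on the trace rather than copied from the script:

  # Guarded kill, as exercised in the log above.
  killprocess() {
      local pid=$1 name
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 1        # still running?
      name=$(ps --no-headers -o comm= "$pid")       # refuse to signal the wrong process
      [ "$name" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                       # reap (works when pid is our child)
  }
  killprocess 88974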
00:21:56.070 8868.00 IOPS, 34.64 MiB/s [2024-12-06T13:15:44.710Z] [2024-12-06 13:15:29.692142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.070 [2024-12-06 13:15:29.692189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.070 [... further READ/WRITE command prints (LBAs 85480-86112), each with an ABORTED - SQ DELETION (00/08) completion from the deleted 4420 path, repeat through 13:15:29.693437 ...]
00:21:56.071 [2024-12-06 13:15:29.693462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:16 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.071 [2024-12-06 13:15:29.693478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.071 [2024-12-06 13:15:29.693508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.071 [2024-12-06 13:15:29.693543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.071 [2024-12-06 13:15:29.693572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.071 [2024-12-06 13:15:29.693601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.071 [2024-12-06 13:15:29.693630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.071 [2024-12-06 13:15:29.693659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.071 [2024-12-06 13:15:29.693689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.693726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.693757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.693786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.693828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.693858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.693887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.693916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.693946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.693976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.693992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 
[2024-12-06 13:15:29.694105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.071 [2024-12-06 13:15:29.694512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.071 [2024-12-06 13:15:29.694540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.072 [2024-12-06 13:15:29.694968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.694983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.694996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 
[2024-12-06 13:15:29.695394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.072 [2024-12-06 13:15:29.695795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.072 [2024-12-06 13:15:29.695810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.073 [2024-12-06 13:15:29.695824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.695840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.073 [2024-12-06 13:15:29.695854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.695887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.695903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86408 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.695917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.695952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.695964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.695975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.695989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.696013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.696024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86424 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.696037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.696068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.696079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86432 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.696092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.696116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.696126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86440 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.696140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.696172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.696183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86448 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.696207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.696231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.696241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86456 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.696254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.696279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.696289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86464 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.696303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.696334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.696354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86472 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.696367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.696391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 
13:15:29.696402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86480 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.696415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.696439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.696460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86488 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.696476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.073 [2024-12-06 13:15:29.696505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.073 [2024-12-06 13:15:29.696516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86496 len:8 PRP1 0x0 PRP2 0x0 00:21:56.073 [2024-12-06 13:15:29.696529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696607] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:21:56.073 [2024-12-06 13:15:29.696680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.073 [2024-12-06 13:15:29.696701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.073 [2024-12-06 13:15:29.696752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.073 [2024-12-06 13:15:29.696780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.073 [2024-12-06 13:15:29.696808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.073 [2024-12-06 13:15:29.696822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:21:56.073 [2024-12-06 13:15:29.696890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1777a00 (9): Bad file descriptor
00:21:56.073 [2024-12-06 13:15:29.700770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:56.073 [2024-12-06 13:15:29.730220] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:21:56.073 8719.00 IOPS, 34.06 MiB/s [2024-12-06T13:15:44.713Z]
00:21:56.073 8847.00 IOPS, 34.56 MiB/s [2024-12-06T13:15:44.713Z]
00:21:56.073 8898.75 IOPS, 34.76 MiB/s [2024-12-06T13:15:44.713Z]
00:21:56.073 8915.00 IOPS, 34.82 MiB/s [2024-12-06T13:15:44.713Z]
00:21:56.073 [2024-12-06 13:15:33.499101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.073 [2024-12-06 13:15:33.499147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.073 [2024-12-06 13:15:33.499174 - 13:15:33.501324] [... matching print_command/print_completion pairs repeated for every queued command on sqid:1 (READ lba:104800-105120, WRITE lba:105240-105448, len:8), all completed ABORTED - SQ DELETION (00/08) ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.075 [2024-12-06 13:15:33.501353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.075 [2024-12-06 13:15:33.501391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.075 [2024-12-06 13:15:33.501421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.075 [2024-12-06 13:15:33.501463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.075 [2024-12-06 13:15:33.501496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.075 [2024-12-06 13:15:33.501525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.075 [2024-12-06 13:15:33.501554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.075 [2024-12-06 13:15:33.501584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.075 [2024-12-06 13:15:33.501613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-12-06 13:15:33.501642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-12-06 13:15:33.501671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-12-06 13:15:33.501700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-12-06 13:15:33.501716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-12-06 13:15:33.501735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.501750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.076 [2024-12-06 13:15:33.501763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.501787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.076 [2024-12-06 13:15:33.501813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.501830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.076 [2024-12-06 13:15:33.501844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.501859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.501873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.501888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.501902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.501917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.501930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.501946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.501959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.501975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.501988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:56.076 [2024-12-06 13:15:33.502285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502596] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.076 [2024-12-06 13:15:33.502933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-12-06 13:15:33.502957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-12-06 13:15:33.502971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.502987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-12-06 13:15:33.503000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.503016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-12-06 13:15:33.503029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.503044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-12-06 13:15:33.503058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.503073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-12-06 13:15:33.503086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.503101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-12-06 13:15:33.503115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.503146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.077 [2024-12-06 13:15:33.503160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.077 [2024-12-06 13:15:33.503171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105232 len:8 PRP1 0x0 PRP2 0x0 00:21:56.077 [2024-12-06 13:15:33.503185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.503248] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:21:56.077 [2024-12-06 
13:15:33.503304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-12-06 13:15:33.503326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.503350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-12-06 13:15:33.503363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.503378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-12-06 13:15:33.503391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.503406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-12-06 13:15:33.503419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-12-06 13:15:33.503432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:56.077 [2024-12-06 13:15:33.503491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1777a00 (9): Bad file descriptor 00:21:56.077 [2024-12-06 13:15:33.507410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:56.077 [2024-12-06 13:15:33.535552] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
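The status printed for every aborted command above, ABORTED - SQ DELETION (00/08), decodes per the NVMe base spec as status code type 0x0 (generic) / status code 0x08 (Command Aborted due to SQ Deletion), and dnr:0 means the Do Not Retry bit is clear, which is why bdev_nvme can re-issue these I/Os on the new path after the "Start failover from 10.0.0.3:4421 to 10.0.0.3:4422" notice. When triaging a burst like this it helps to collapse it into per-opcode LBA ranges; what follows is a minimal, hypothetical helper (not part of the SPDK test suite) that assumes the exact print_command/print_completion wording shown in this log:

#!/usr/bin/env python3
"""Hypothetical log helper: collapse an SPDK nvme_qpair abort burst
into per-opcode LBA ranges plus an abort-completion count."""
import re
import sys
from collections import defaultdict

# Command print, e.g. "nvme_io_qpair_print_command: *NOTICE*: READ
# sqid:1 cid:2 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+"
)
# Marker spdk_nvme_print_completion uses for these completions
ABORT_MARK = "ABORTED - SQ DELETION"

def summarize(stream):
    lbas = defaultdict(list)  # opcode -> LBAs from the command prints
    aborts = 0                # completion prints carrying the abort status
    for line in stream:
        for op, lba in CMD_RE.findall(line):
            lbas[op].append(int(lba))
        aborts += line.count(ABORT_MARK)
    for op, vals in sorted(lbas.items()):
        print(f"{op}: {len(vals)} commands, lba {min(vals)}-{max(vals)}")
    print(f"completions with '{ABORT_MARK}': {aborts}")

if __name__ == "__main__":
    summarize(sys.stdin)

Fed this console output on stdin (script and file names are placeholders, e.g. python3 summarize_aborts.py < console.log), the 13:15:33 burst would summarize to roughly READ: 18 commands, lba 105096-105232 and WRITE: 72 commands, lba 105240-105808.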
00:21:56.077 8879.00 IOPS, 34.68 MiB/s [2024-12-06T13:15:44.717Z] 8923.29 IOPS, 34.86 MiB/s [2024-12-06T13:15:44.717Z] 8945.12 IOPS, 34.94 MiB/s [2024-12-06T13:15:44.717Z] 8951.33 IOPS, 34.97 MiB/s [2024-12-06T13:15:44.717Z]
00:21:56.077 [2024-12-06 13:15:38.102119 - 13:15:38.102266] [4 pairs condensed: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:56.077 [2024-12-06 13:15:38.102280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777a00 is same with the state(6) to be set
00:21:56.077 - 00:21:56.080 [2024-12-06 13:15:38.102532 - 13:15:38.105712] [~96 nvme_qpair.c print_command/print_completion pairs condensed: READ sqid:1 lba:52352-52864 and WRITE sqid:1 lba:53096-53336, len:8 each, every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:21:56.080 [2024-12-06
13:15:38.105727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.080 [2024-12-06 13:15:38.105741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.105756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.080 [2024-12-06 13:15:38.105770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.105785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.080 [2024-12-06 13:15:38.105817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.105835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.080 [2024-12-06 13:15:38.105848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.105864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.105883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.105900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.105913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.105929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.105942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.105963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.105977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.105992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:91 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53080 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.080 [2024-12-06 13:15:38.106701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e8ee0 is same with the state(6) to be set 00:21:56.080 [2024-12-06 13:15:38.106732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.080 [2024-12-06 13:15:38.106743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.080 [2024-12-06 13:15:38.106753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53088 len:8 PRP1 0x0 PRP2 0x0 00:21:56.080 [2024-12-06 13:15:38.106767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.080 [2024-12-06 13:15:38.106831] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:21:56.080 [2024-12-06 13:15:38.106850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:56.080 [2024-12-06 13:15:38.110743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:56.080 [2024-12-06 13:15:38.110785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1777a00 (9): Bad file descriptor 00:21:56.080 [2024-12-06 13:15:38.140669] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:21:56.080 8912.10 IOPS, 34.81 MiB/s [2024-12-06T13:15:44.720Z] 8901.64 IOPS, 34.77 MiB/s [2024-12-06T13:15:44.720Z] 8890.58 IOPS, 34.73 MiB/s [2024-12-06T13:15:44.720Z] 8894.31 IOPS, 34.74 MiB/s [2024-12-06T13:15:44.720Z] 8900.29 IOPS, 34.77 MiB/s [2024-12-06T13:15:44.720Z] 8910.20 IOPS, 34.81 MiB/s 00:21:56.080 Latency(us) 00:21:56.080 [2024-12-06T13:15:44.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.080 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:56.080 Verification LBA range: start 0x0 length 0x4000 00:21:56.080 NVMe0n1 : 15.01 8908.74 34.80 238.05 0.00 13959.99 640.47 20375.74 00:21:56.080 [2024-12-06T13:15:44.720Z] =================================================================================================================== 00:21:56.080 [2024-12-06T13:15:44.720Z] Total : 8908.74 34.80 238.05 0.00 13959.99 640.47 20375.74 00:21:56.081 Received shutdown signal, test time was about 15.000000 seconds 00:21:56.081 00:21:56.081 Latency(us) 00:21:56.081 [2024-12-06T13:15:44.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.081 [2024-12-06T13:15:44.721Z] =================================================================================================================== 00:21:56.081 [2024-12-06T13:15:44.721Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:56.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
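The trace above ends with the failover count check: the test greps the captured bdevperf output for successful controller resets and requires exactly three. A minimal standalone sketch of that check, reusing the log path and expected count visible in the trace (everything else is an assumption):

    #!/usr/bin/env bash
    # bdevperf logs one "Resetting controller successful" line per completed
    # controller reset; the test passes only if the expected number occurred.
    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    expected=3

    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != expected )); then
        echo "FAIL: expected $expected successful resets, saw $count" >&2
        exit 1
    fi
    echo "OK: $count failovers completed successfully"
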
00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=89227 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 89227 /var/tmp/bdevperf.sock 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89227 ']' 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.081 13:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:56.081 13:15:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.081 13:15:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:56.081 13:15:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:56.081 [2024-12-06 13:15:44.453848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:56.081 13:15:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:21:56.338 [2024-12-06 13:15:44.722095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:21:56.338 13:15:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:56.596 NVMe0n1 00:21:56.596 13:15:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:56.852 00:21:56.852 13:15:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:57.433 00:21:57.433 13:15:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:57.433 13:15:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:57.690 13:15:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:57.948 13:15:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:01.228 13:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.228 13:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:01.228 13:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89357 00:22:01.228 13:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.228 13:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89357 00:22:02.714 { 00:22:02.714 "results": [ 00:22:02.714 { 00:22:02.714 "job": "NVMe0n1", 00:22:02.714 "core_mask": "0x1", 00:22:02.714 "workload": "verify", 00:22:02.714 "status": "finished", 00:22:02.714 "verify_range": { 00:22:02.714 "start": 0, 00:22:02.714 "length": 16384 00:22:02.714 }, 00:22:02.714 "queue_depth": 128, 00:22:02.714 "io_size": 4096, 00:22:02.714 "runtime": 1.011778, 00:22:02.714 "iops": 7961.232602408829, 00:22:02.714 "mibps": 31.098564853159488, 00:22:02.714 "io_failed": 0, 00:22:02.714 "io_timeout": 0, 00:22:02.714 "avg_latency_us": 15976.749288640594, 00:22:02.714 "min_latency_us": 2502.2836363636366, 00:22:02.714 "max_latency_us": 25380.305454545454 00:22:02.714 } 00:22:02.714 ], 00:22:02.714 "core_count": 1 00:22:02.714 } 00:22:02.714 13:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:02.714 [2024-12-06 13:15:43.815718] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:22:02.714 [2024-12-06 13:15:43.815850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89227 ] 00:22:02.714 [2024-12-06 13:15:43.959495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.714 [2024-12-06 13:15:44.019741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.714 [2024-12-06 13:15:46.423443] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:02.714 [2024-12-06 13:15:46.423579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.714 [2024-12-06 13:15:46.423607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.714 [2024-12-06 13:15:46.423626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.714 [2024-12-06 13:15:46.423641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.714 [2024-12-06 13:15:46.423656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.714 [2024-12-06 13:15:46.423670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.714 [2024-12-06 13:15:46.423685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.714 [2024-12-06 13:15:46.423699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.714 [2024-12-06 13:15:46.423714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:02.714 [2024-12-06 13:15:46.423765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:02.714 [2024-12-06 13:15:46.423797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ffa00 (9): Bad file descriptor 00:22:02.714 [2024-12-06 13:15:46.435414] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:02.714 Running I/O for 1 seconds... 
00:22:02.714 7927.00 IOPS, 30.96 MiB/s 00:22:02.714 Latency(us) 00:22:02.714 [2024-12-06T13:15:51.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.714 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:02.714 Verification LBA range: start 0x0 length 0x4000 00:22:02.714 NVMe0n1 : 1.01 7961.23 31.10 0.00 0.00 15976.75 2502.28 25380.31 00:22:02.714 [2024-12-06T13:15:51.354Z] =================================================================================================================== 00:22:02.714 [2024-12-06T13:15:51.354Z] Total : 7961.23 31.10 0.00 0.00 15976.75 2502.28 25380.31 00:22:02.714 13:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:02.714 13:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:02.971 13:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:03.228 13:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:03.228 13:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:03.486 13:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:03.744 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:07.021 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:07.021 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 89227 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89227 ']' 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89227 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89227 00:22:07.280 killing process with pid 89227 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89227' 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89227 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89227 00:22:07.280 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:07.538 13:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:07.796 rmmod nvme_tcp 00:22:07.796 rmmod nvme_fabrics 00:22:07.796 rmmod nvme_keyring 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 88870 ']' 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 88870 00:22:07.796 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88870 ']' 00:22:07.797 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88870 00:22:07.797 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:07.797 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.797 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88870 00:22:07.797 killing process with pid 88870 00:22:07.797 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:07.797 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:07.797 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88870' 00:22:07.797 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88870 00:22:07.797 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88870 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:08.055 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:22:08.313 00:22:08.313 real 0m33.638s 00:22:08.313 user 2m11.335s 00:22:08.313 sys 0m4.806s 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:08.313 ************************************ 00:22:08.313 END TEST nvmf_failover 00:22:08.313 ************************************ 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.313 ************************************ 00:22:08.313 START TEST nvmf_host_discovery 00:22:08.313 ************************************ 00:22:08.313 13:15:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:08.572 * Looking for test storage... 
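The failover run above finished by tearing its network topology down; a condensed sketch of that cleanup sequence, using the namespace and device names from the trace (the explicit netns delete is an assumption standing in for what remove_spdk_ns does here):

    #!/usr/bin/env bash
    # Undo the veth test topology: strip only the SPDK-tagged firewall rules,
    # detach the bridge ports, delete the links, then remove the namespace.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: remove_spdk_ns ends here
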
00:22:08.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:08.572 13:15:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:08.572 13:15:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:08.572 13:15:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:08.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.572 --rc genhtml_branch_coverage=1 00:22:08.572 --rc genhtml_function_coverage=1 00:22:08.572 --rc genhtml_legend=1 00:22:08.572 --rc geninfo_all_blocks=1 00:22:08.572 --rc geninfo_unexecuted_blocks=1 00:22:08.572 00:22:08.572 ' 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:08.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.572 --rc genhtml_branch_coverage=1 00:22:08.572 --rc genhtml_function_coverage=1 00:22:08.572 --rc genhtml_legend=1 00:22:08.572 --rc geninfo_all_blocks=1 00:22:08.572 --rc geninfo_unexecuted_blocks=1 00:22:08.572 00:22:08.572 ' 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:08.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.572 --rc genhtml_branch_coverage=1 00:22:08.572 --rc genhtml_function_coverage=1 00:22:08.572 --rc genhtml_legend=1 00:22:08.572 --rc geninfo_all_blocks=1 00:22:08.572 --rc geninfo_unexecuted_blocks=1 00:22:08.572 00:22:08.572 ' 00:22:08.572 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:08.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.572 --rc genhtml_branch_coverage=1 00:22:08.572 --rc genhtml_function_coverage=1 00:22:08.572 --rc genhtml_legend=1 00:22:08.572 --rc geninfo_all_blocks=1 00:22:08.572 --rc geninfo_unexecuted_blocks=1 00:22:08.572 00:22:08.572 ' 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:08.573 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
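The variables above fix the addressing (initiators 10.0.0.1/2, targets 10.0.0.3/4) and device names that nvmf_veth_init uses; a condensed sketch of the topology the traced commands below build, consolidated from the trace itself:

    #!/usr/bin/env bash
    # Target side lives in its own network namespace; two veth pairs reach it
    # and two serve the initiators, all joined by the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up, including loopback inside the namespace.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open the NVMe/TCP port; the real run tags each rule with an SPDK_NVMF
    # comment so teardown can strip exactly these rules again.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings that follow verify each leg of this topology (initiator to both targets, and target namespace back to both initiators) before any NVMe traffic is attempted.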
00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:08.573 Cannot find device "nvmf_init_br" 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:08.573 Cannot find device "nvmf_init_br2" 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:08.573 Cannot find device "nvmf_tgt_br" 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:08.573 Cannot find device "nvmf_tgt_br2" 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:08.573 Cannot find device "nvmf_init_br" 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:08.573 Cannot find device "nvmf_init_br2" 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:22:08.573 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:08.573 Cannot find device "nvmf_tgt_br" 00:22:08.574 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:22:08.574 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:08.574 Cannot find device "nvmf_tgt_br2" 00:22:08.574 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:22:08.574 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:08.574 Cannot find device "nvmf_br" 00:22:08.574 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:22:08.574 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:08.833 Cannot find device "nvmf_init_if" 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:08.833 Cannot find device "nvmf_init_if2" 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:08.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:08.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:08.833 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:08.833 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:22:08.833 00:22:08.833 --- 10.0.0.3 ping statistics --- 00:22:08.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.833 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:08.833 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:08.833 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:22:08.833 00:22:08.833 --- 10.0.0.4 ping statistics --- 00:22:08.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.833 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:08.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:08.833 00:22:08.833 --- 10.0.0.1 ping statistics --- 00:22:08.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.833 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:08.833 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:09.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:09.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:22:09.091 00:22:09.091 --- 10.0.0.2 ping statistics --- 00:22:09.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.092 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=89724 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 89724 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 89724 ']' 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.092 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.092 [2024-12-06 13:15:57.557209] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
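Condensed, the nvmf/common.sh sequence traced above builds the test topology: four veth pairs, a network namespace for the target, and one bridge tying the 10.0.0.0/24 segment together. A minimal sketch for orientation follows; the setup_test_net wrapper name is invented, but every interface name, address, and rule is taken from the trace:

  setup_test_net() {
      ip netns add nvmf_tgt_ns_spdk
      # Host-side initiator links and target links, each half of a veth pair
      ip link add nvmf_init_if type veth peer name nvmf_init_br
      ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
      ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
      ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
      # Target ends move into the namespace where nvmf_tgt will run
      ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
      ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
      ip addr add 10.0.0.1/24 dev nvmf_init_if
      ip addr add 10.0.0.2/24 dev nvmf_init_if2
      ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
      ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
      # Bring every end up, inside and outside the namespace
      for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
                 nvmf_tgt_br nvmf_tgt_br2; do
          ip link set "$dev" up
      done
      for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do
          ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up
      done
      # One bridge makes the four peer ends a single L2 segment
      ip link add nvmf_br type bridge
      ip link set nvmf_br up
      for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
          ip link set "$dev" master nvmf_br
      done
      # Admit NVMe/TCP (port 4420) from the init links; allow bridge hairpin traffic
      iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
      iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
      iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  }

The four pings just above verify both directions of the segment (host to namespace and back) before any NVMe traffic is attempted.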
00:22:09.092 [2024-12-06 13:15:57.557296] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.092 [2024-12-06 13:15:57.700940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.350 [2024-12-06 13:15:57.764221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.350 [2024-12-06 13:15:57.764269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.350 [2024-12-06 13:15:57.764281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.350 [2024-12-06 13:15:57.764290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.350 [2024-12-06 13:15:57.764297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.350 [2024-12-06 13:15:57.764693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.350 [2024-12-06 13:15:57.940766] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.350 [2024-12-06 13:15:57.952917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.350 null0 00:22:09.350 13:15:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.350 null1 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=89762 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 89762 /tmp/host.sock 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 89762 ']' 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.350 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.350 13:15:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.609 [2024-12-06 13:15:58.046142] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
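At this point both SPDK processes of the test are up. Stripped of the xtrace plumbing, the steps traced since the pings amount to the following; paths and arguments are taken verbatim from the trace, with backgrounding via & standing in for the framework's process management:

  # 1) Target: runs inside the namespace, answers RPCs on /var/tmp/spdk.sock.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # 2) Configure the target over the default socket (rpc_cmd without -s).
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.3 -s 8009
  rpc_cmd bdev_null_create null0 1000 512   # name, size in MB, block size
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine

  # 3) Host under test: a second nvmf_tgt instance on its own RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!

From here on, every rpc_cmd -s /tmp/host.sock call drives the host-side bdev_nvme stack, while plain rpc_cmd calls keep configuring the target.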
00:22:09.609 [2024-12-06 13:15:58.046252] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89762 ] 00:22:09.609 [2024-12-06 13:15:58.197194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.868 [2024-12-06 13:15:58.266234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:09.868 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.126 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:10.126 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:10.127 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.386 [2024-12-06 13:15:58.789202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:10.386 13:15:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:10.386 13:15:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.668 13:15:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:10.668 13:15:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:10.950 [2024-12-06 13:15:59.426107] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:10.950 [2024-12-06 13:15:59.426152] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:10.950 
[2024-12-06 13:15:59.426192] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:10.950 [2024-12-06 13:15:59.512238] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:22:10.950 [2024-12-06 13:15:59.566702] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:22:10.950 [2024-12-06 13:15:59.567647] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x126e580:1 started. 00:22:10.950 [2024-12-06 13:15:59.569556] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:10.950 [2024-12-06 13:15:59.569584] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:10.950 [2024-12-06 13:15:59.574555] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x126e580 was disconnected and freed. delete nvme_qpair. 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.518 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:11.778 [2024-12-06 13:16:00.268351] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x126e920:1 started. 00:22:11.778 [2024-12-06 13:16:00.274624] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x126e920 was disconnected and freed. delete nvme_qpair. 
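The eval and (( max-- )) churn that dominates this phase comes from one small polling helper plus a few jq getters. Reconstructed from the xtrace (the bodies below mirror the commands visible above; the originals live in common/autotest_common.sh and host/discovery.sh):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }

  get_subsystem_names() {  # controller names seen by the host
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {        # namespaces surfaced as bdevs (nvme0n1, nvme0n2, ...)
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  get_notification_count() {  # bdev notifications received since $notify_id
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
          -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

So waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' above simply polls the host's bdev list up to ten times, once per second, until both null namespaces have appeared.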
00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:11.778 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.779 [2024-12-06 13:16:00.374042] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:11.779 [2024-12-06 13:16:00.375087] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:11.779 [2024-12-06 13:16:00.375127] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:11.779 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:12.038 [2024-12-06 13:16:00.461579] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:12.038 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:12.039 [2024-12-06 13:16:00.520088] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:22:12.039 [2024-12-06 13:16:00.520172] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:12.039 [2024-12-06 13:16:00.520185] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:12.039 [2024-12-06 13:16:00.520192] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:12.039 13:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:12.974 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:12.974 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:12.974 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:12.974 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:12.974 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:12.974 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.974 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.974 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:12.974 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:12.974 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.234 [2024-12-06 13:16:01.667190] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:13.234 [2024-12-06 13:16:01.667250] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:13.234 [2024-12-06 13:16:01.668785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.234 [2024-12-06 13:16:01.668822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.234 [2024-12-06 13:16:01.668837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.234 [2024-12-06 13:16:01.668847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.234 [2024-12-06 13:16:01.668858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.234 [2024-12-06 13:16:01.668868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.234 [2024-12-06 13:16:01.668878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.234 [2024-12-06 13:16:01.668887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.234 [2024-12-06 13:16:01.668898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6850 is same with the state(6) to be set 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.234 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:13.235 [2024-12-06 13:16:01.678731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6850 (9): Bad file descriptor 00:22:13.235 [2024-12-06 13:16:01.688749] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:13.235 [2024-12-06 13:16:01.688778] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:13.235 [2024-12-06 13:16:01.688785] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:13.235 [2024-12-06 13:16:01.688791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:13.235 [2024-12-06 13:16:01.688828] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:13.235 [2024-12-06 13:16:01.688932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.235 [2024-12-06 13:16:01.688957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e6850 with addr=10.0.0.3, port=4420 00:22:13.235 [2024-12-06 13:16:01.688971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6850 is same with the state(6) to be set 00:22:13.235 [2024-12-06 13:16:01.688990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6850 (9): Bad file descriptor 00:22:13.235 [2024-12-06 13:16:01.689022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:13.235 [2024-12-06 13:16:01.689035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:13.235 [2024-12-06 13:16:01.689047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:13.235 [2024-12-06 13:16:01.689057] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:13.235 [2024-12-06 13:16:01.689064] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
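Editor's note: the xtrace above keeps replaying the same polling idiom — a condition string is stored in cond, re-evaluated with eval up to max=10 times, with a one-second sleep between attempts (autotest_common.sh @918-@924 in the trace). A minimal reconstruction of that helper, inferred from the trace; the exact body and the failure return path are assumptions, not verbatim source:

    # Sketch of the waitforcondition helper as it appears to behave above
    # (@918 local cond, @919 local max=10, @920 (( max-- )), @921 eval,
    # @924 sleep 1, @922 return 0). Reconstructed, not copied from SPDK.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1 # assumed: give up once the retry budget is spent
    }

    # e.g. the check running at this point in the log:
    # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'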
00:22:13.235 [2024-12-06 13:16:01.689069] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.235 [2024-12-06 13:16:01.698839] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:13.235 [2024-12-06 13:16:01.698867] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:13.235 [2024-12-06 13:16:01.698874] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:13.235 [2024-12-06 13:16:01.698880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:13.235 [2024-12-06 13:16:01.698907] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:13.235 [2024-12-06 13:16:01.698971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.235 [2024-12-06 13:16:01.698993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e6850 with addr=10.0.0.3, port=4420 00:22:13.235 [2024-12-06 13:16:01.699005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6850 is same with the state(6) to be set 00:22:13.235 [2024-12-06 13:16:01.699022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6850 (9): Bad file descriptor 00:22:13.235 [2024-12-06 13:16:01.699038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:13.235 [2024-12-06 13:16:01.699047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:13.235 [2024-12-06 13:16:01.699057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:13.235 [2024-12-06 13:16:01.699066] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:13.235 [2024-12-06 13:16:01.699072] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:13.235 [2024-12-06 13:16:01.699077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:13.235 [2024-12-06 13:16:01.708928] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:13.235 [2024-12-06 13:16:01.708975] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:13.235 [2024-12-06 13:16:01.708982] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:13.235 [2024-12-06 13:16:01.708989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:13.235 [2024-12-06 13:16:01.709027] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:13.235 [2024-12-06 13:16:01.709117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.235 [2024-12-06 13:16:01.709142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e6850 with addr=10.0.0.3, port=4420 00:22:13.235 [2024-12-06 13:16:01.709155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6850 is same with the state(6) to be set 00:22:13.235 [2024-12-06 13:16:01.709187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6850 (9): Bad file descriptor 00:22:13.235 [2024-12-06 13:16:01.709312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:13.235 [2024-12-06 13:16:01.709328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:13.235 [2024-12-06 13:16:01.709340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:13.235 [2024-12-06 13:16:01.709350] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:13.235 [2024-12-06 13:16:01.709356] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:13.235 [2024-12-06 13:16:01.709362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:13.235 [2024-12-06 13:16:01.719047] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:13.235 [2024-12-06 13:16:01.719101] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:13.235 [2024-12-06 13:16:01.719108] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:13.235 [2024-12-06 13:16:01.719115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:13.235 [2024-12-06 13:16:01.719156] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:13.235 [2024-12-06 13:16:01.719275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.235 [2024-12-06 13:16:01.719301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e6850 with addr=10.0.0.3, port=4420 00:22:13.235 [2024-12-06 13:16:01.719315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6850 is same with the state(6) to be set 00:22:13.235 [2024-12-06 13:16:01.719335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6850 (9): Bad file descriptor 00:22:13.235 [2024-12-06 13:16:01.719364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:13.235 [2024-12-06 13:16:01.719376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:13.235 [2024-12-06 13:16:01.719388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:13.235 [2024-12-06 13:16:01.719397] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:13.235 [2024-12-06 13:16:01.719404] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:13.235 [2024-12-06 13:16:01.719409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:13.235 [2024-12-06 13:16:01.729171] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:13.235 [2024-12-06 13:16:01.729199] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:13.235 [2024-12-06 13:16:01.729207] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:13.235 [2024-12-06 13:16:01.729213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:13.235 [2024-12-06 13:16:01.729247] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:13.235 [2024-12-06 13:16:01.729336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.235 [2024-12-06 13:16:01.729358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e6850 with addr=10.0.0.3, port=4420 00:22:13.235 [2024-12-06 13:16:01.729370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6850 is same with the state(6) to be set 00:22:13.235 [2024-12-06 13:16:01.729388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6850 (9): Bad file descriptor 00:22:13.235 [2024-12-06 13:16:01.729415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:13.235 [2024-12-06 13:16:01.729427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:13.235 [2024-12-06 13:16:01.729437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:13.235 [2024-12-06 13:16:01.729446] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:13.235 [2024-12-06 13:16:01.729466] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:13.235 [2024-12-06 13:16:01.729472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
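Editor's note: interleaved with the reconnect noise, the test keeps sampling controller, bdev, and path state through three small jq pipelines (host/discovery.sh @55/@59/@63 in the trace). Sketches of those helpers, with bodies inferred from the pipelines visible in the xtrace — rpc_cmd is the harness wrapper around the SPDK JSON-RPC client, and the real definitions may differ:

    get_subsystem_names() { # @59: attached controller names, space-separated
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers |
            jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {       # @55: bdev names, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs |
            jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() { # @63: per-controller listening ports, numeric sort
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }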
00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:13.235 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.236 [2024-12-06 13:16:01.739256] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:13.236 [2024-12-06 13:16:01.739299] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:13.236 [2024-12-06 13:16:01.739306] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:13.236 [2024-12-06 13:16:01.739311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:13.236 [2024-12-06 13:16:01.739335] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:13.236 [2024-12-06 13:16:01.739387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.236 [2024-12-06 13:16:01.739406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e6850 with addr=10.0.0.3, port=4420 00:22:13.236 [2024-12-06 13:16:01.739418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6850 is same with the state(6) to be set 00:22:13.236 [2024-12-06 13:16:01.739433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6850 (9): Bad file descriptor 00:22:13.236 [2024-12-06 13:16:01.739471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:13.236 [2024-12-06 13:16:01.739485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:13.236 [2024-12-06 13:16:01.739495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:13.236 [2024-12-06 13:16:01.739503] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:13.236 [2024-12-06 13:16:01.739509] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:13.236 [2024-12-06 13:16:01.739514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:13.236 [2024-12-06 13:16:01.749345] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:13.236 [2024-12-06 13:16:01.749370] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:13.236 [2024-12-06 13:16:01.749376] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:22:13.236 [2024-12-06 13:16:01.749382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:13.236 [2024-12-06 13:16:01.749407] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:13.236 [2024-12-06 13:16:01.749477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.236 [2024-12-06 13:16:01.749499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e6850 with addr=10.0.0.3, port=4420 00:22:13.236 [2024-12-06 13:16:01.749511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6850 is same with the state(6) to be set 00:22:13.236 [2024-12-06 13:16:01.749527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6850 (9): Bad file descriptor 00:22:13.236 [2024-12-06 13:16:01.749543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:13.236 [2024-12-06 13:16:01.749553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:13.236 [2024-12-06 13:16:01.749563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:13.236 [2024-12-06 13:16:01.749572] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:13.236 [2024-12-06 13:16:01.749578] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:13.236 [2024-12-06 13:16:01.749583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
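Editor's note: the repeated connect() failures with errno 111 (ECONNREFUSED) above are the expected fallout of the step at host/discovery.sh @127 — the 4420 listener was just removed, so every reconnect attempt to that port is refused until the discovery poller fetches a fresh log page, sees 4420 gone, and prunes that path. Condensed, the sequence is (both RPCs appear verbatim in the trace; $NVMF_SECOND_PORT is 4421 in this run):

    # @127: drop the first listener; the 4421 listener stays up.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420
    # @131: wait until the host's path list collapses to the second port only.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'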
00:22:13.236 [2024-12-06 13:16:01.752954] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:22:13.236 [2024-12-06 13:16:01.752985] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:13.236 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:13.494 13:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.494 13:16:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.867 [2024-12-06 13:16:03.086682] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:14.867 [2024-12-06 13:16:03.086727] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:14.867 [2024-12-06 13:16:03.086749] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:14.868 [2024-12-06 13:16:03.172815] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:22:14.868 [2024-12-06 13:16:03.231343] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:22:14.868 [2024-12-06 13:16:03.232186] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1240530:1 started. 00:22:14.868 [2024-12-06 13:16:03.234580] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:14.868 [2024-12-06 13:16:03.234628] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.868 [2024-12-06 13:16:03.236055] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1240530 was disconnected and freed. delete nvme_qpair. 
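Editor's note: just before discovery is stopped and restarted above, the test checks the notify counters one more time: notify_get_notifications is queried starting from the last seen id, the returned array length becomes notification_count, and notify_id advances past the consumed events (2 new events here, with notify_id moving from 2 to 4). A sketch of that bookkeeping, reconstructed from @74-@80; the increment rule is an assumption that is merely consistent with the values in this trace:

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count)) # 2 -> 4 in this run
    }
    is_notification_count_eq() {
        local expected_count=$1
        # relies on bash dynamic scoping: the eval inside waitforcondition
        # can still see expected_count and notification_count
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }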
00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.868 2024/12/06 13:16:03 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:14.868 request: 00:22:14.868 { 00:22:14.868 "method": "bdev_nvme_start_discovery", 00:22:14.868 "params": { 00:22:14.868 "name": "nvme", 00:22:14.868 "trtype": "tcp", 00:22:14.868 "traddr": "10.0.0.3", 00:22:14.868 "adrfam": "ipv4", 00:22:14.868 "trsvcid": "8009", 00:22:14.868 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:14.868 "wait_for_attach": true 00:22:14.868 } 00:22:14.868 } 00:22:14.868 Got JSON-RPC error response 00:22:14.868 GoRPCClient: error on JSON-RPC call 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:14.868 
13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.868 2024/12/06 13:16:03 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:14.868 request: 00:22:14.868 { 00:22:14.868 "method": "bdev_nvme_start_discovery", 00:22:14.868 "params": { 00:22:14.868 "name": "nvme_second", 00:22:14.868 "trtype": "tcp", 00:22:14.868 "traddr": "10.0.0.3", 00:22:14.868 "adrfam": "ipv4", 00:22:14.868 "trsvcid": "8009", 00:22:14.868 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:14.868 "wait_for_attach": true 00:22:14.868 } 
00:22:14.868 } 00:22:14.868 Got JSON-RPC error response 00:22:14.868 GoRPCClient: error on JSON-RPC call 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.868 13:16:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:14.868 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.869 13:16:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.241 [2024-12-06 13:16:04.503030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:16.241 [2024-12-06 13:16:04.503130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1279c30 with addr=10.0.0.3, port=8010 00:22:16.241 [2024-12-06 13:16:04.503161] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:16.241 [2024-12-06 13:16:04.503176] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:16.241 [2024-12-06 13:16:04.503190] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:17.187 [2024-12-06 13:16:05.502996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.187 [2024-12-06 13:16:05.503073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1279c30 with addr=10.0.0.3, port=8010 00:22:17.187 [2024-12-06 13:16:05.503107] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:17.188 [2024-12-06 13:16:05.503118] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:17.188 [2024-12-06 13:16:05.503130] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:18.120 [2024-12-06 13:16:06.502812] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:22:18.120 2024/12/06 13:16:06 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:18.120 request: 00:22:18.120 { 00:22:18.120 "method": "bdev_nvme_start_discovery", 00:22:18.120 "params": { 00:22:18.120 "name": "nvme_second", 00:22:18.120 "trtype": "tcp", 00:22:18.120 "traddr": "10.0.0.3", 00:22:18.120 "adrfam": "ipv4", 00:22:18.120 "trsvcid": "8010", 00:22:18.120 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:18.120 "wait_for_attach": false, 00:22:18.120 "attach_timeout_ms": 3000 00:22:18.120 } 00:22:18.120 } 00:22:18.120 Got JSON-RPC error response 00:22:18.120 GoRPCClient: error on JSON-RPC call 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 89762 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.120 rmmod nvme_tcp 00:22:18.120 rmmod nvme_fabrics 00:22:18.120 rmmod nvme_keyring 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:18.120 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 89724 ']' 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 89724 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 89724 ']' 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 89724 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89724 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:18.121 killing process with pid 89724 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 89724' 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 89724 00:22:18.121 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 89724 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:18.379 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:18.379 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:18.379 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:18.379 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:22:18.750 00:22:18.750 real 0m10.297s 00:22:18.750 user 0m20.028s 00:22:18.750 sys 0m1.714s 
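Editor's note: before the teardown above wraps up, it is worth flagging the negative-test wrapper this run leaned on — the three expected-failure bdev_nvme_start_discovery calls (duplicate names at @143/@149, attach timeout at @155) all ran under NOT, which inverts the exit status. Roughly reconstructed from the es=0/es=1 bookkeeping in the trace (@652-@679); the real autotest_common.sh implementation does more (valid_exec_arg, signal handling) than this sketch:

    NOT() {
        local es=0
        "$@" || es=$?
        # NOT succeeds only if the wrapped command failed
        (( es != 0 ))
    }
    # e.g.: NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    #           -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w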
00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.750 ************************************ 00:22:18.750 END TEST nvmf_host_discovery 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.750 ************************************ 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.750 ************************************ 00:22:18.750 START TEST nvmf_host_multipath_status 00:22:18.750 ************************************ 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:18.750 * Looking for test storage... 00:22:18.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:22:18.750 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.030 --rc genhtml_branch_coverage=1 00:22:19.030 --rc genhtml_function_coverage=1 00:22:19.030 --rc genhtml_legend=1 00:22:19.030 --rc geninfo_all_blocks=1 00:22:19.030 --rc geninfo_unexecuted_blocks=1 00:22:19.030 00:22:19.030 ' 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.030 --rc genhtml_branch_coverage=1 00:22:19.030 --rc genhtml_function_coverage=1 00:22:19.030 --rc genhtml_legend=1 00:22:19.030 --rc geninfo_all_blocks=1 00:22:19.030 --rc geninfo_unexecuted_blocks=1 00:22:19.030 00:22:19.030 ' 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.030 --rc genhtml_branch_coverage=1 00:22:19.030 --rc genhtml_function_coverage=1 00:22:19.030 --rc genhtml_legend=1 00:22:19.030 --rc geninfo_all_blocks=1 00:22:19.030 --rc geninfo_unexecuted_blocks=1 00:22:19.030 00:22:19.030 ' 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.030 --rc genhtml_branch_coverage=1 00:22:19.030 --rc genhtml_function_coverage=1 00:22:19.030 --rc genhtml_legend=1 00:22:19.030 --rc geninfo_all_blocks=1 00:22:19.030 --rc geninfo_unexecuted_blocks=1 00:22:19.030 00:22:19.030 ' 00:22:19.030 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:19.031 13:16:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:19.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:19.031 Cannot find device "nvmf_init_br" 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:19.031 Cannot find device "nvmf_init_br2" 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:19.031 Cannot find device "nvmf_tgt_br" 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:19.031 Cannot find device "nvmf_tgt_br2" 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:19.031 Cannot find device "nvmf_init_br" 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:19.031 Cannot find device "nvmf_init_br2" 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:22:19.031 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:19.032 Cannot find device "nvmf_tgt_br" 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:19.032 Cannot find device "nvmf_tgt_br2" 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:19.032 Cannot find device "nvmf_br" 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:22:19.032 Cannot find device "nvmf_init_if" 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:19.032 Cannot find device "nvmf_init_if2" 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:19.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:19.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:19.032 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:19.291 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:19.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:19.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:22:19.292 00:22:19.292 --- 10.0.0.3 ping statistics --- 00:22:19.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.292 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:19.292 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:19.292 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:22:19.292 00:22:19.292 --- 10.0.0.4 ping statistics --- 00:22:19.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.292 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:19.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:19.292 00:22:19.292 --- 10.0.0.1 ping statistics --- 00:22:19.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.292 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:19.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:22:19.292 00:22:19.292 --- 10.0.0.2 ping statistics --- 00:22:19.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.292 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=90276 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 90276 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90276 ']' 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
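[Editor's note] At this point nvmfappstart launches the target inside the namespace (nvmfpid=90276) and waitforlisten blocks until the RPC socket answers. A rough stand-in for that pair of steps; the polling loop is an assumption, since the real waitforlisten in autotest_common.sh also caps retries:

    # Start nvmf_tgt in the test namespace; -m 0x3 pins two reactors (cores 0-1).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Poll until the app answers on /var/tmp/spdk.sock (loop body assumed).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.1
    done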
00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.292 13:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:19.292 [2024-12-06 13:16:07.915798] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:22:19.292 [2024-12-06 13:16:07.915879] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.551 [2024-12-06 13:16:08.065022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:19.551 [2024-12-06 13:16:08.138401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.551 [2024-12-06 13:16:08.138476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.551 [2024-12-06 13:16:08.138493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.551 [2024-12-06 13:16:08.138504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.551 [2024-12-06 13:16:08.138513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.551 [2024-12-06 13:16:08.139734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.551 [2024-12-06 13:16:08.139748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.486 13:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.486 13:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:20.486 13:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:20.486 13:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.486 13:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:20.486 13:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.486 13:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90276 00:22:20.486 13:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:20.744 [2024-12-06 13:16:09.317322] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.744 13:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:21.022 Malloc0 00:22:21.280 13:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:21.537 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.796 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:22.055 [2024-12-06 13:16:10.671362] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:22.314 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:22.314 [2024-12-06 13:16:10.939547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:22.571 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:22.571 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90385 00:22:22.571 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.571 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90385 /var/tmp/bdevperf.sock 00:22:22.571 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90385 ']' 00:22:22.571 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.571 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.571 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
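[Editor's note] Taken together, the RPCs traced here and just below build the whole multipath fixture: one ANA-reporting subsystem with a Malloc namespace and two TCP listeners, which bdevperf then attaches once per portal. Condensed into a standalone sketch; the commands are as traced, only the $rpc shorthand and the loop are mine:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: ANA-reporting subsystem (-r) with up to two ANA groups (-m 2).
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

    # Initiator side (bdevperf's RPC socket): the same subsystem attached via
    # both portals, so bdev Nvme0n1 ends up with two ANA-graded I/O paths.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    for port in 4420 4421; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
             -a 10.0.0.3 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
             -x multipath -l -1 -o 10
    done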
00:22:22.571 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.571 13:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:22.844 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.844 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:22.844 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:23.137 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:23.701 Nvme0n1 00:22:23.702 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:23.960 Nvme0n1 00:22:23.960 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:23.960 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:26.491 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:26.491 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:26.491 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:26.748 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:27.747 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:27.747 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:27.747 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.747 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:28.006 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.006 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:28.006 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:28.006 13:16:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.265 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.265 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:28.265 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.265 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:28.833 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.833 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:28.833 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:28.833 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.091 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.091 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:29.091 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.091 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.350 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.350 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:29.350 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.350 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:29.608 13:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.608 13:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:29.608 13:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:30.175 13:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
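[Editor's note] This is the pattern the rest of the test repeats: set_ANA_state reprograms each listener's ANA state on the target, the sleep 1 below gives the host time to re-read the ANA log page, and check_status/port_status grade each path with jq. A standalone sketch of this non_optimized/optimized round, with the port_status body reconstructed from the @64 trace lines:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # port_status <trsvcid> <field> -> "true"/"false"; field is one of
    # current / connected / accessible, as reported by bdev_nvme_get_io_paths.
    port_status() {
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
    }

    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.3 -s 4421 -n optimized
    sleep 1

    # Under the default active_passive policy only the optimized path is
    # current; the non_optimized one stays connected and accessible as standby.
    [[ $(port_status 4420 current) == false ]]
    [[ $(port_status 4421 current) == true ]]
    [[ $(port_status 4420 accessible) == true ]]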
00:22:30.433 13:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:31.367 13:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:31.367 13:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:31.367 13:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.367 13:16:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.624 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.624 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:31.624 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.624 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.893 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.893 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.893 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.893 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:32.458 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.458 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:32.458 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.458 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:32.715 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.715 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:32.715 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.715 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:32.973 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.973 13:16:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:32.973 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.973 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.230 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.230 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:33.230 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:33.796 13:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:22:34.055 13:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:34.989 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:34.989 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:34.989 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:34.989 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.247 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.247 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:35.247 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:35.247 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.813 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.813 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:35.813 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.813 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:36.071 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.071 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:36.071 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:36.071 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.329 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.329 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:36.330 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.330 13:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:36.588 13:16:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.588 13:16:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:36.588 13:16:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:36.588 13:16:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.154 13:16:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.154 13:16:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:37.154 13:16:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:37.412 13:16:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:37.671 13:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:38.607 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:38.607 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:38.607 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.607 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:38.865 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.865 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:22:38.865 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.865 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:39.431 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:39.431 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:39.431 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:39.431 13:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.431 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.431 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:39.431 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.431 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:39.691 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.691 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:39.691 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:39.691 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.258 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.258 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:40.258 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.258 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:40.258 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:40.258 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:40.259 13:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:40.825 13:16:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:41.083 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:42.018 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:42.018 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:42.018 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:42.018 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.276 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:42.276 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:42.276 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.276 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:42.843 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:42.843 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:42.843 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:42.843 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.102 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.102 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:43.102 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.102 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:43.361 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.361 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:43.361 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.361 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:22:43.620 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:43.620 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:43.620 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.620 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:44.189 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:44.189 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:44.189 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:44.189 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:44.540 13:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:45.915 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:45.915 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:45.915 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.915 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:45.915 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:45.915 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:45.915 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.915 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:46.174 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.174 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:46.174 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:46.174 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
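Every check_status block in this trace reduces to two small helpers from host/multipath_status.sh. The sketch below is reconstructed purely from the xtrace lines above, so treat it as an approximation of the script rather than its verbatim body (the $rpc shorthand is ours, not the script's): set_ANA_state pushes a new ANA state to each of the two target listeners through the plain RPC socket, while port_status reads the initiator-side view back through bdevperf's RPC socket and filters one attribute of the path on a given port with jq.

    # Approximate reconstruction of the helpers traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand, not in the script

    set_ANA_state() {
        # $1 / $2: new ANA state for the 4420 / 4421 listeners of cnode1.
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    port_status() {
        # $1: port, $2: attribute (current|connected|accessible), $3: expected value.
        local status
        status=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

Each cycle therefore follows the same pattern: flip the ANA states on the target, sleep 1 so the initiator can pick up the ANA change, then assert the expected current/connected/accessible triple for both ports.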
00:22:46.739 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.739 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:46.739 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.739 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:46.996 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.996 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:46.996 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:46.996 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.252 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:47.252 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:47.252 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:47.252 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.815 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.815 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:48.072 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:48.072 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:48.637 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:48.895 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:49.826 13:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:49.826 13:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:49.826 13:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
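Up to multipath_status.sh@116 the bdev was still in the default active_passive policy, which is why at most one path ever reported current=true. The RPC traced above switches the policy at runtime; under active_active, I/O is spread across every usable optimized path, so once both listeners are set back to optimized the following check_status expects current=true on both 4420 and 4421:

    # As traced above: switch Nvme0n1 to active_active at runtime through
    # bdevperf's RPC socket.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

The later cycles then exercise the asymmetric combinations: non_optimized/optimized (only the optimized 4421 path stays current) and non_optimized/non_optimized (both paths current again, since the best available ANA state is used).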
00:22:49.826 13:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:50.084 13:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.084 13:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:50.084 13:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.084 13:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:50.648 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.648 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:50.648 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.648 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:50.905 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.905 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:50.905 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.905 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:51.162 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.162 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:51.162 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.162 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:51.730 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.730 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:51.730 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.730 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:51.991 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.991 
13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:51.991 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:52.558 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:52.558 13:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:53.933 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:53.933 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:53.933 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.933 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:53.933 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:53.933 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:53.933 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.933 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:54.192 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.192 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:54.192 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.192 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:54.759 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.759 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:54.759 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.759 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.016 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.016 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:55.016 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.016 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:55.274 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.274 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:55.274 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:55.274 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.531 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.531 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:55.531 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:56.097 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:22:56.355 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:57.291 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:57.291 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:57.291 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.291 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:57.857 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.857 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:57.857 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.857 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:58.115 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.115 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:22:58.115 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.115 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.374 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.374 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.374 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.374 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:58.633 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.633 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:58.633 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:58.633 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.199 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.199 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:59.199 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.199 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:59.458 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.458 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:59.458 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:59.744 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:00.013 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:00.947 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:00.947 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:00.947 13:16:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.947 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:01.206 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.206 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:01.206 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.206 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.771 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:01.771 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.771 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.771 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:02.029 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.029 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:02.029 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.029 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:02.287 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.287 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:02.287 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.287 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:02.854 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.854 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:02.854 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.854 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:23:02.854 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:02.854 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90385 00:23:02.854 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90385 ']' 00:23:02.854 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90385 00:23:02.854 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:03.116 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.116 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90385 00:23:03.116 killing process with pid 90385 00:23:03.116 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:03.116 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:03.116 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90385' 00:23:03.116 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90385 00:23:03.116 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90385 00:23:03.116 { 00:23:03.116 "results": [ 00:23:03.116 { 00:23:03.116 "job": "Nvme0n1", 00:23:03.116 "core_mask": "0x4", 00:23:03.116 "workload": "verify", 00:23:03.116 "status": "terminated", 00:23:03.116 "verify_range": { 00:23:03.116 "start": 0, 00:23:03.116 "length": 16384 00:23:03.116 }, 00:23:03.116 "queue_depth": 128, 00:23:03.116 "io_size": 4096, 00:23:03.116 "runtime": 38.902672, 00:23:03.116 "iops": 8080.4218280944815, 00:23:03.116 "mibps": 31.56414776599407, 00:23:03.116 "io_failed": 0, 00:23:03.116 "io_timeout": 0, 00:23:03.116 "avg_latency_us": 15811.850042772243, 00:23:03.116 "min_latency_us": 121.94909090909091, 00:23:03.116 "max_latency_us": 5033164.8 00:23:03.116 } 00:23:03.116 ], 00:23:03.116 "core_count": 1 00:23:03.116 } 00:23:03.116 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90385 00:23:03.116 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:03.116 [2024-12-06 13:16:11.011706] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:23:03.116 [2024-12-06 13:16:11.011805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90385 ] 00:23:03.116 [2024-12-06 13:16:11.155956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.116 [2024-12-06 13:16:11.219119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.116 Running I/O for 90 seconds... 
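Everything from here on is the bdevperf side of the run, dumped from try.txt after the process was reaped. The results JSON above is the part worth reading: a verify workload at queue depth 128 sustained roughly 8080 IOPS over the 38.9 s runtime with io_failed: 0, i.e. no I/O was lost while the ANA states were being flipped underneath it. The NOTICE storm that follows is expected rather than an error: commands in flight toward a listener that just went inaccessible complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), and the multipath layer requeues them on the surviving path. If the JSON were saved to a file, the headline numbers could be pulled back out with something like the following (results.json is a hypothetical name; the CI job only prints the blob inline):

    jq -r '.results[] | "\(.job): \(.iops | floor) IOPS avg, \(.io_failed) failed IOs in \(.runtime)s"' results.json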
00:23:03.116 9046.00 IOPS, 35.34 MiB/s [2024-12-06T13:16:51.756Z] 9103.00 IOPS, 35.56 MiB/s [2024-12-06T13:16:51.756Z] 9130.33 IOPS, 35.67 MiB/s [2024-12-06T13:16:51.756Z] 9028.50 IOPS, 35.27 MiB/s [2024-12-06T13:16:51.756Z] 8946.20 IOPS, 34.95 MiB/s [2024-12-06T13:16:51.756Z] 8976.17 IOPS, 35.06 MiB/s [2024-12-06T13:16:51.756Z] 8937.14 IOPS, 34.91 MiB/s [2024-12-06T13:16:51.756Z] 8925.25 IOPS, 34.86 MiB/s [2024-12-06T13:16:51.756Z] 8843.33 IOPS, 34.54 MiB/s [2024-12-06T13:16:51.756Z] 8764.20 IOPS, 34.24 MiB/s [2024-12-06T13:16:51.756Z] 8728.64 IOPS, 34.10 MiB/s [2024-12-06T13:16:51.756Z] 8738.50 IOPS, 34.13 MiB/s [2024-12-06T13:16:51.756Z] 8769.00 IOPS, 34.25 MiB/s [2024-12-06T13:16:51.756Z] 8812.71 IOPS, 34.42 MiB/s [2024-12-06T13:16:51.756Z] 8853.20 IOPS, 34.58 MiB/s [2024-12-06T13:16:51.756Z] 8881.44 IOPS, 34.69 MiB/s [2024-12-06T13:16:51.756Z] [2024-12-06 13:16:29.142624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.116 [2024-12-06 13:16:29.142690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.142750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.142771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.142795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.142812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.142834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.142850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.142872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.142888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.142910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.142926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.142948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.142964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.142985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143389] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:03.116 [2024-12-06 13:16:29.143583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.116 [2024-12-06 13:16:29.143601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.143623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.117 [2024-12-06 13:16:29.143639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.143661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.117 [2024-12-06 13:16:29.143677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.143698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.117 [2024-12-06 13:16:29.143716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.143738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.143754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.143775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.143791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 
13:16:29.143813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.143830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.143851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.143867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.143889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.143905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.143927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.143942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.143964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.143980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.144242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.144301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.144344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.144385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.144568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.144611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.144651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.144876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.144922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.144962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.144986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.117 [2024-12-06 13:16:29.145418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.117 [2024-12-06 13:16:29.145474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.117 [2024-12-06 13:16:29.145517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.117 [2024-12-06 13:16:29.145557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:03.117 [2024-12-06 13:16:29.145581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:03.117 [2024-12-06 13:16:29.145597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.145621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.145636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.145669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.145687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.145711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.145726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.145750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.145766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.145790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.145822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.145847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.145863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.145887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.145903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.145927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.145942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.145966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.145994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:49 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.146039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.118 [2024-12-06 13:16:29.146079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.118 [2024-12-06 13:16:29.146119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.118 [2024-12-06 13:16:29.146159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.118 [2024-12-06 13:16:29.146209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.118 [2024-12-06 13:16:29.146378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.118 [2024-12-06 13:16:29.146425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.118 [2024-12-06 13:16:29.146484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.118 [2024-12-06 13:16:29.146528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.118 [2024-12-06 13:16:29.146571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:03.118 [2024-12-06 13:16:29.146597] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:03.118 [2024-12-06 13:16:29.146613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:03.118 [... 19 more WRITE command/completion pairs, lba 126920 through 127064, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:23:03.119 [2024-12-06 13:16:29.147486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.119 [2024-12-06 13:16:29.147502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:03.119 [... 31 more READ command/completion pairs, lba 126368 through 126608, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:23:03.119 8636.94 IOPS, 33.74 MiB/s [2024-12-06T13:16:51.759Z]
8157.11 IOPS, 31.86 MiB/s [2024-12-06T13:16:51.759Z]
7727.79 IOPS, 30.19 MiB/s [2024-12-06T13:16:51.759Z]
7341.40 IOPS, 28.68 MiB/s [2024-12-06T13:16:51.759Z]
6991.81 IOPS, 27.31 MiB/s [2024-12-06T13:16:51.759Z]
6870.36 IOPS, 26.84 MiB/s [2024-12-06T13:16:51.760Z]
6923.57 IOPS, 27.05 MiB/s [2024-12-06T13:16:51.760Z]
6977.83 IOPS, 27.26 MiB/s [2024-12-06T13:16:51.760Z]
7095.72 IOPS, 27.72 MiB/s [2024-12-06T13:16:51.760Z]
7164.23 IOPS, 27.99 MiB/s [2024-12-06T13:16:51.760Z]
7260.56 IOPS, 28.36 MiB/s [2024-12-06T13:16:51.760Z]
7431.36 IOPS, 29.03 MiB/s [2024-12-06T13:16:51.760Z]
7513.76 IOPS, 29.35 MiB/s [2024-12-06T13:16:51.760Z]
7562.77 IOPS, 29.54 MiB/s [2024-12-06T13:16:51.760Z]
7604.16 IOPS, 29.70 MiB/s [2024-12-06T13:16:51.760Z]
7626.69 IOPS, 29.79 MiB/s [2024-12-06T13:16:51.760Z]
7729.24 IOPS, 30.19 MiB/s [2024-12-06T13:16:51.760Z]
7848.68 IOPS, 30.66 MiB/s [2024-12-06T13:16:51.760Z]
7966.03 IOPS, 31.12 MiB/s [2024-12-06T13:16:51.760Z]
00:23:03.120 [2024-12-06 13:16:48.424915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:03.120 [2024-12-06 13:16:48.425010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:23:03.120 [... further interleaved WRITE (lba 76648 through 77248) and READ (lba 76344 through 76656) command/completion pairs, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:23:03.121 [2024-12-06 13:16:48.431056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:03.121 [2024-12-06 13:16:48.431072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0
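Every pair in the dump above is the same event: an I/O submitted on qpair 1 completing with status ASYMMETRIC ACCESS INACCESSIBLE (SCT 03 / SC 02), which is exactly what the host is expected to see while the multipath test flips the active path's ANA state. When a dump like this has to be digested offline, tallying completions by status phrase reduces it to one line per status. The helper below is an illustrative sketch, not part of the SPDK repo, and assumes the *NOTICE* format printed by nvme_qpair.c above (status text followed by the SCT/SC pair in parentheses):

#!/usr/bin/env bash
# tally_completions.sh (hypothetical name) -- count completion records per status.
# Usage: ./tally_completions.sh console.log   # console.log = a saved copy of this output
grep -oE 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]+\([0-9a-f]{2}/[0-9a-f]{2}\)' "$1" |
        sed 's/.*\*NOTICE\*: //' |     # keep only the status phrase and (SCT/SC)
        sort | uniq -c | sort -rn      # one line per distinct status, most frequent first

Run over this section it would collapse the whole dump into a single "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" line with its count, which is usually all that matters when triaging a failover run.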
00:23:03.121 8058.83 IOPS, 31.48 MiB/s [2024-12-06T13:16:51.760Z]
8091.03 IOPS, 31.61 MiB/s [2024-12-06T13:16:51.760Z]
8081.66 IOPS, 31.57 MiB/s [2024-12-06T13:16:51.760Z]
00:23:03.121 Received shutdown signal, test time was about 38.903513 seconds
00:23:03.121
00:23:03.121 Latency(us)
00:23:03.121 [2024-12-06T13:16:51.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:03.121 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:03.121 Verification LBA range: start 0x0 length 0x4000
00:23:03.121 Nvme0n1 : 38.90 8080.42 31.56 0.00 0.00 15811.85 121.95 5033164.80
00:23:03.121 [2024-12-06T13:16:51.761Z] ===================================================================================================================
00:23:03.121 [2024-12-06T13:16:51.761Z] Total : 8080.42 31.56 0.00 0.00 15811.85 121.95 5033164.80
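The summary block above is the authoritative result of the run: 8080.42 IOPS at 31.56 MiB/s over 38.90 seconds with no failures or timeouts, despite the ANA transitions. The two throughput columns are redundant given the 4096-byte IO size (8080.42 x 4096 / 2^20 = 31.56), which makes the table easy to sanity-check mechanically. The awk snippet below is an illustrative helper, not an SPDK script; its column positions assume exactly the layout printed above, and build.log stands in for a saved copy of this console output:

awk '/Total[[:space:]]*:/ {
        iops = $(NF - 6); mibs = $(NF - 5)      # IOPS and MiB/s columns, counted from the right
        calc = iops * 4096 / (1024 * 1024)      # recompute MiB/s from IOPS at 4 KiB per IO
        printf "IOPS=%s reported=%s recomputed=%.2f MiB/s\n", iops, mibs, calc
}' build.log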
00:23:03.121 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:03.381 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:03.381 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:23:03.381 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:23:03.381 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:03.381 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:03.639 rmmod nvme_tcp
00:23:03.639 rmmod nvme_fabrics
00:23:03.639 rmmod nvme_keyring
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 90276 ']'
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 90276
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90276 ']'
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90276
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90276
00:23:03.639 killing process with pid 90276
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90276'
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90276
00:23:03.639 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90276
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:23:03.898 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
00:23:04.156
00:23:04.156 real 0m45.389s
00:23:04.156 user 2m29.973s
00:23:04.156 sys 0m10.864s
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:04.156 ************************************
00:23:04.156 END TEST nvmf_host_multipath_status
00:23:04.156 ************************************
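The teardown traced above reads as a procedure: delete the test subsystem over RPC, unload the initiator kernel modules, kill the nvmf_tgt reactor (pid 90276), strip only the SPDK_NVMF iptables rules, and then dismantle the veth/bridge/namespace topology in roughly the reverse order of its creation. A condensed standalone sketch of that sequence follows; it is illustrative rather than a copy of nvmf/common.sh, the rootdir and tgt_pid variables are placeholders, and the final "ip netns delete" is an assumed equivalent of the remove_spdk_ns helper seen in the trace:

#!/usr/bin/env bash
set -e
"$rootdir"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp || true          # the real script retries this in a {1..20} loop
modprobe -v -r nvme-fabrics || true
kill "$tgt_pid" && wait "$tgt_pid"       # wait only works if the target was launched by this shell
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster     # detach each veth end from the bridge...
        ip link set "$port" down         # ...then bring it down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if    # target-side veths live inside the netns
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk         # assumed equivalent of remove_spdk_ns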
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:04.156 ************************************
00:23:04.156 START TEST nvmf_discovery_remove_ifc
00:23:04.156 ************************************
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:23:04.156 * Looking for test storage...
00:23:04.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:23:04.156 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:04.416 --rc genhtml_branch_coverage=1
00:23:04.416 --rc genhtml_function_coverage=1
00:23:04.416 --rc genhtml_legend=1
00:23:04.416 --rc geninfo_all_blocks=1
00:23:04.416 --rc geninfo_unexecuted_blocks=1
00:23:04.416
00:23:04.416 '
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:04.416 --rc genhtml_branch_coverage=1
00:23:04.416 --rc genhtml_function_coverage=1
00:23:04.416 --rc genhtml_legend=1
00:23:04.416 --rc geninfo_all_blocks=1
00:23:04.416 --rc geninfo_unexecuted_blocks=1
00:23:04.416
00:23:04.416 '
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:23:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:04.416 --rc genhtml_branch_coverage=1
00:23:04.416 --rc genhtml_function_coverage=1
00:23:04.416 --rc genhtml_legend=1
00:23:04.416 --rc geninfo_all_blocks=1
00:23:04.416 --rc geninfo_unexecuted_blocks=1
00:23:04.416
00:23:04.416 '
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:23:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:04.416 --rc genhtml_branch_coverage=1
00:23:04.416 --rc genhtml_function_coverage=1
00:23:04.416 --rc genhtml_legend=1
00:23:04.416 --rc geninfo_all_blocks=1
00:23:04.416 --rc geninfo_unexecuted_blocks=1
00:23:04.416
00:23:04.416 '
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
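The xtrace above is scripts/common.sh comparing the installed lcov version against 2: cmp_versions splits "1.15" and "2" on the characters . - :, walks the fields left to right, and decides on the first unequal pair (here 1 < 2, so lt returns success and the --rc compatibility options get exported). For reference, the function below is a standalone re-derivation of that comparison, not a copy of scripts/common.sh, and it omits the decimal() digit validation seen in the trace:

lt() {
        local -a v1 v2
        local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do            # missing fields compare as 0
                (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
                (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                                   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 sorts before 2"       # mirrors the trace: ver1[0]=1 < ver2[0]=2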
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.416 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:04.417 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:04.417 13:16:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:04.417 Cannot find device "nvmf_init_br" 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:04.417 Cannot find device "nvmf_init_br2" 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:04.417 Cannot find device "nvmf_tgt_br" 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:04.417 Cannot find device "nvmf_tgt_br2" 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:04.417 Cannot find device "nvmf_init_br" 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:04.417 Cannot find device "nvmf_init_br2" 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:04.417 Cannot find device "nvmf_tgt_br" 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:04.417 Cannot find device "nvmf_tgt_br2" 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:04.417 Cannot find device "nvmf_br" 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:23:04.417 13:16:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:04.417 Cannot find device "nvmf_init_if" 00:23:04.417 13:16:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:23:04.417 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:04.417 Cannot find device "nvmf_init_if2" 00:23:04.417 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:23:04.417 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:04.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:04.417 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:23:04.417 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:04.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:04.417 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:23:04.417 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:04.417 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:04.417 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:04.417 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:04.676 13:16:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:04.676 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:04.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:04.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:23:04.677 00:23:04.677 --- 10.0.0.3 ping statistics --- 00:23:04.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.677 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:04.677 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:04.677 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:23:04.677 00:23:04.677 --- 10.0.0.4 ping statistics --- 00:23:04.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.677 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:04.677 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:04.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:04.935 00:23:04.935 --- 10.0.0.1 ping statistics --- 00:23:04.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.935 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:04.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:23:04.935 00:23:04.935 --- 10.0.0.2 ping statistics --- 00:23:04.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.935 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=91785 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 91785 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91785 ']' 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.935 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.935 [2024-12-06 13:16:53.422692] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:23:04.935 [2024-12-06 13:16:53.422815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.935 [2024-12-06 13:16:53.571318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.193 [2024-12-06 13:16:53.633442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.193 [2024-12-06 13:16:53.633546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.193 [2024-12-06 13:16:53.633561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.193 [2024-12-06 13:16:53.633570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.193 [2024-12-06 13:16:53.633579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.193 [2024-12-06 13:16:53.634014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.193 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.193 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:05.193 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:05.193 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.193 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.193 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.193 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:05.193 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.193 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.193 [2024-12-06 13:16:53.818057] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.193 [2024-12-06 13:16:53.826214] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:05.451 null0 00:23:05.451 [2024-12-06 13:16:53.858133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:05.451 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.451 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91820 00:23:05.451 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:05.451 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 91820 /tmp/host.sock 00:23:05.452 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91820 ']' 00:23:05.452 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:05.452 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.452 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:05.452 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:05.452 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.452 13:16:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.452 [2024-12-06 13:16:53.945741] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:23:05.452 [2024-12-06 13:16:53.946338] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91820 ] 00:23:05.710 [2024-12-06 13:16:54.098816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.710 [2024-12-06 13:16:54.196259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.644 13:16:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.644 13:16:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:06.644 13:16:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.644 13:16:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:06.644 13:16:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.644 13:16:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.644 13:16:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.644 13:16:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:06.644 13:16:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.644 13:16:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.644 13:16:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.644 13:16:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:06.644 13:16:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.644 13:16:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.574 [2024-12-06 13:16:56.111526] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:07.574 [2024-12-06 13:16:56.111571] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:07.574 [2024-12-06 13:16:56.111593] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:07.574 [2024-12-06 13:16:56.197823] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:23:07.832 [2024-12-06 13:16:56.252326] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:23:07.832 [2024-12-06 13:16:56.253314] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6b4110:1 started. 00:23:07.832 [2024-12-06 13:16:56.255238] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:07.832 [2024-12-06 13:16:56.255295] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:07.832 [2024-12-06 13:16:56.255331] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:07.832 [2024-12-06 13:16:56.255348] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:07.832 [2024-12-06 13:16:56.255375] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.832 [2024-12-06 13:16:56.260021] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6b4110 was disconnected and freed. delete nvme_qpair. 
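The wait_for_bdev/get_bdev_list pair that dominates the rest of this trace is a simple poll loop over the host app's RPC socket. A minimal stand-alone sketch, assuming rpc.py is invoked the way the test's rpc_cmd wrapper does (the once-per-second cadence matches the sleep 1 calls traced below):

  get_bdev_list() {
      # list bdev names from the host app over its UNIX-domain RPC socket,
      # flattened to a single sorted line for easy comparison
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # poll until the bdev list equals the expected value ('' means "no bdevs left")
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

Each iteration of that loop accounts for one rpc_cmd/jq/sort/xargs cluster in the log lines that follow.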
00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:07.832 13:16:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:08.764 13:16:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:08.764 13:16:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.764 13:16:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:08.764 13:16:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:08.764 13:16:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:08.765 13:16:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.765 13:16:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:09.045 13:16:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.045 13:16:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:09.045 13:16:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:09.981 13:16:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:09.981 13:16:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:09.981 13:16:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:09.981 13:16:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.981 13:16:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:09.981 13:16:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.981 13:16:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:09.981 13:16:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.981 13:16:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:09.982 13:16:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:10.916 13:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:10.916 13:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.916 13:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.916 13:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:10.916 13:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:10.916 13:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.916 13:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:10.916 13:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.916 13:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:10.916 13:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:12.290 13:17:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:12.290 13:17:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.290 13:17:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:12.290 13:17:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.290 13:17:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:12.290 13:17:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:12.290 13:17:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:12.290 13:17:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.290 13:17:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:12.290 13:17:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:13.250 13:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:13.250 13:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.250 13:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.250 13:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:13.250 13:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.250 13:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:13.250 13:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:13.250 13:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.250 13:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:13.250 13:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:13.250 [2024-12-06 13:17:01.682916] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:13.250 [2024-12-06 13:17:01.683149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.250 [2024-12-06 13:17:01.683170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.250 [2024-12-06 13:17:01.683185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.250 [2024-12-06 13:17:01.683195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.250 [2024-12-06 13:17:01.683205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.250 [2024-12-06 13:17:01.683216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.250 [2024-12-06 13:17:01.683226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.250 [2024-12-06 13:17:01.683236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.250 [2024-12-06 13:17:01.683247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.250 [2024-12-06 13:17:01.683256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.250 [2024-12-06 13:17:01.683267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f6290 is same with the state(6) to be set 00:23:13.250 [2024-12-06 13:17:01.692911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f6290 (9): Bad file descriptor 00:23:13.250 [2024-12-06 13:17:01.702929] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:13.250 [2024-12-06 13:17:01.702952] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:13.250 [2024-12-06 13:17:01.702959] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:13.250 [2024-12-06 13:17:01.702965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:13.250 [2024-12-06 13:17:01.703008] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:14.182 13:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:14.183 13:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.183 13:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:14.183 13:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.183 13:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.183 13:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:14.183 13:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:14.183 [2024-12-06 13:17:02.731601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:14.183 [2024-12-06 13:17:02.732048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f6290 with addr=10.0.0.3, port=4420 00:23:14.183 [2024-12-06 13:17:02.732103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f6290 is same with the state(6) to be set 00:23:14.183 [2024-12-06 13:17:02.732174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f6290 (9): Bad file descriptor 00:23:14.183 [2024-12-06 13:17:02.733135] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:14.183 [2024-12-06 13:17:02.733256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:14.183 [2024-12-06 13:17:02.733317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:14.183 [2024-12-06 13:17:02.733353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:14.183 [2024-12-06 13:17:02.733387] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:14.183 [2024-12-06 13:17:02.733419] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
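The churn above (errno 110 on connect, "Unable to perform failover", "controller reinitialization failed") is paced by the timeouts handed to bdev_nvme_start_discovery at step @69 earlier in the trace. The call, reproduced with annotations (flags verbatim from the log; the comments are interpretation):

  # --reconnect-delay-sec 1       wait 1s between reconnect attempts
  # --ctrlr-loss-timeout-sec 2    drop the controller after 2s of failed resets
  # --fast-io-fail-timeout-sec 1  fail queued I/O after 1s without a connection
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

With a 2-second loss timeout and a 1-second retry delay, the host gets roughly two reconnect attempts before bdev_nvme gives up and deletes nvme0n1, which is exactly the transition wait_for_bdev '' is waiting for.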
00:23:14.183 [2024-12-06 13:17:02.733436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:14.183 [2024-12-06 13:17:02.733511] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:14.183 [2024-12-06 13:17:02.733536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:14.183 13:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.183 13:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:14.183 13:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:15.119 [2024-12-06 13:17:03.733652] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:15.119 [2024-12-06 13:17:03.733945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:15.119 [2024-12-06 13:17:03.733987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:15.119 [2024-12-06 13:17:03.733999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:15.119 [2024-12-06 13:17:03.734009] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:15.119 [2024-12-06 13:17:03.734019] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:15.119 [2024-12-06 13:17:03.734026] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:15.119 [2024-12-06 13:17:03.734032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
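When replaying a failure like this by hand, the same RPC socket can be queried for controller state between retries; bdev_nvme_get_controllers is a standard SPDK RPC, though this particular test never calls it, so the line below is purely illustrative:

  # inspect reconnect/reset state of the controllers the host app is tracking
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers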
00:23:15.119 [2024-12-06 13:17:03.734067] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:23:15.119 [2024-12-06 13:17:03.734119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.119 [2024-12-06 13:17:03.734135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.119 [2024-12-06 13:17:03.734149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.119 [2024-12-06 13:17:03.734159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.119 [2024-12-06 13:17:03.734170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.119 [2024-12-06 13:17:03.734179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.119 [2024-12-06 13:17:03.734189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.119 [2024-12-06 13:17:03.734199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.119 [2024-12-06 13:17:03.734209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.119 [2024-12-06 13:17:03.734218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.119 [2024-12-06 13:17:03.734229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
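The fault injection driving all of this is just two ip commands run in the target's namespace back at steps @75/@76, mirrored by the recovery at @82/@83 a few lines below (commands lifted from the trace):

  # pull the target's data interface out from under the connected host
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

  # restore it; the host's discovery poller reconnects on its own
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The target keeps its listeners (10.0.0.3 ports 8009 and 4420) configured throughout; once the address returns, the host rediscovers nqn.2016-06.io.spdk:cnode0 via port 8009 and re-creates the data controller without any target-side reconfiguration.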
00:23:15.119 [2024-12-06 13:17:03.734274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x622820 (9): Bad file descriptor 00:23:15.119 [2024-12-06 13:17:03.735263] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:15.119 [2024-12-06 13:17:03.735286] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:15.378 13:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:16.313 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.313 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.313 13:17:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.313 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.313 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.313 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.313 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.313 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.313 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:16.313 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:17.324 [2024-12-06 13:17:05.743097] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:17.324 [2024-12-06 13:17:05.743372] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:17.324 [2024-12-06 13:17:05.743423] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:17.324 [2024-12-06 13:17:05.829201] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:23:17.324 [2024-12-06 13:17:05.883668] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:23:17.324 [2024-12-06 13:17:05.884265] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x68b290:1 started. 00:23:17.324 [2024-12-06 13:17:05.885633] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:17.324 [2024-12-06 13:17:05.885681] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:17.324 [2024-12-06 13:17:05.885706] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:17.324 [2024-12-06 13:17:05.885723] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:23:17.324 [2024-12-06 13:17:05.885733] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:17.324 [2024-12-06 13:17:05.891620] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x68b290 was disconnected and freed. delete nvme_qpair. 
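A readability note for the [[ ]] lines throughout this trace: xtrace escapes every character of a quoted pattern, so [[ nvme1n1 != \n\v\m\e\1\n\1 ]] is not line noise; it is bash's way of printing a literal (non-glob) string comparison. In the script the check is simply:

  [[ "$(get_bdev_list)" != "nvme1n1" ]]   # quoting the RHS forces a literal match, not a pattern

The same applies to the earlier \n\v\m\e\0\n\1 checks against nvme0n1. Note also that the reattached controller surfaces as nvme1/nvme1n1 rather than nvme0n1: the original controller was torn down at ctrlr loss, so bdev_nvme allocates the next index on reattach, and the test deliberately verifies recovery against the new name.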
00:23:17.324 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.324 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.324 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.324 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.324 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.324 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.324 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.582 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91820 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91820 ']' 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91820 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91820 00:23:17.582 killing process with pid 91820 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91820' 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91820 00:23:17.582 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91820 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.841 rmmod nvme_tcp 00:23:17.841 rmmod nvme_fabrics 00:23:17.841 rmmod nvme_keyring 00:23:17.841 13:17:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 91785 ']' 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 91785 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91785 ']' 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91785 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91785 00:23:17.841 killing process with pid 91785 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91785' 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91785 00:23:17.841 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91785 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:18.099 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:23:18.358 00:23:18.358 real 0m14.159s 00:23:18.358 user 0m25.157s 00:23:18.358 sys 0m1.753s 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.358 ************************************ 00:23:18.358 END TEST nvmf_discovery_remove_ifc 00:23:18.358 ************************************ 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.358 ************************************ 00:23:18.358 START TEST nvmf_identify_kernel_target 00:23:18.358 ************************************ 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:18.358 * Looking for test storage... 
00:23:18.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:23:18.358 13:17:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:18.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.619 --rc genhtml_branch_coverage=1 00:23:18.619 --rc genhtml_function_coverage=1 00:23:18.619 --rc genhtml_legend=1 00:23:18.619 --rc geninfo_all_blocks=1 00:23:18.619 --rc geninfo_unexecuted_blocks=1 00:23:18.619 00:23:18.619 ' 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:18.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.619 --rc genhtml_branch_coverage=1 00:23:18.619 --rc genhtml_function_coverage=1 00:23:18.619 --rc genhtml_legend=1 00:23:18.619 --rc geninfo_all_blocks=1 00:23:18.619 --rc geninfo_unexecuted_blocks=1 00:23:18.619 00:23:18.619 ' 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:18.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.619 --rc genhtml_branch_coverage=1 00:23:18.619 --rc genhtml_function_coverage=1 00:23:18.619 --rc genhtml_legend=1 00:23:18.619 --rc geninfo_all_blocks=1 00:23:18.619 --rc geninfo_unexecuted_blocks=1 00:23:18.619 00:23:18.619 ' 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:18.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.619 --rc genhtml_branch_coverage=1 00:23:18.619 --rc genhtml_function_coverage=1 00:23:18.619 --rc genhtml_legend=1 00:23:18.619 --rc geninfo_all_blocks=1 00:23:18.619 --rc geninfo_unexecuted_blocks=1 00:23:18.619 00:23:18.619 ' 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
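[editor's note] The xtrace above walks the version check in scripts/common.sh as it decides whether the installed lcov (1.15 here) is older than 2. A minimal standalone sketch of that comparison, assuming plain bash and purely numeric version components; the helper name cmp_lt is illustrative, not SPDK's:

cmp_lt() {  # cmp_lt 1.15 2 -> true, because 1 < 2 in the first component
    local IFS=.-:             # split on the same separators the trace uses
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # missing parts count as 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                  # equal versions are not "less than"
}
cmp_lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"

This matches what the trace does next: because 1.15 < 2, the script exports the LCOV_OPTS/LCOV values with the --rc lcov_branch_coverage / lcov_function_coverage options seen below.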
00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.619 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.620 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:18.620 13:17:07 
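[editor's note] The "[: : integer expression expected" complaint traced just above (nvmf/common.sh line 33, '[' '' -eq 1 ']') is bash refusing to apply -eq to an empty string; the test exits with status 2 and the script simply falls through to the next branch. A sketch of the failure mode and a common guard, with a hypothetical variable name standing in for whatever flag line 33 actually tests:

flag=''                    # hypothetical: stands in for the unset flag at line 33
[ "$flag" -eq 1 ]          # prints "[: : integer expression expected", exit status 2
[ "${flag:-0}" -eq 1 ]     # guarded: empty expands to 0, comparison evaluates cleanly
(( ${flag:-0} == 1 ))      # arithmetic alternative with the same default

The message is cosmetic here: the conditional body is skipped either way, so the build continues.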
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:18.620 13:17:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:18.620 Cannot find device "nvmf_init_br" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:18.620 Cannot find device "nvmf_init_br2" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:18.620 Cannot find device "nvmf_tgt_br" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:18.620 Cannot find device "nvmf_tgt_br2" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:18.620 Cannot find device "nvmf_init_br" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:18.620 Cannot find device "nvmf_init_br2" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:18.620 Cannot find device "nvmf_tgt_br" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:18.620 Cannot find device "nvmf_tgt_br2" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:18.620 Cannot find device "nvmf_br" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:18.620 Cannot find device "nvmf_init_if" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:18.620 Cannot find device "nvmf_init_if2" 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:18.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:18.620 13:17:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:18.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:18.620 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:18.879 13:17:07 
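[editor's note] Condensed, for orientation: the nvmf_veth_init trace around this point builds a four-veth topology bridged on the host, with the target ends hidden inside the nvmf_tgt_ns_spdk network namespace. A sketch of the first initiator/target pair using the same names and addresses the trace uses (run as root; the second if2/br2 pair at 10.0.0.2/10.0.0.4 is analogous):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                             # the *_br peer ends meet here
ip link set nvmf_init_br master nvmf_br                     # enslaving happens in the trace below
ip link set nvmf_tgt_br master nvmf_br
# the links were already brought up above; the trace below adds iptables ACCEPT
# rules for TCP port 4420 and verifies the paths with pings in both directions

With that in place, 10.0.0.1/10.0.0.2 (host) and 10.0.0.3/10.0.0.4 (namespace) are mutually reachable across nvmf_br, which is what the four ping checks that follow confirm.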
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:18.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:18.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:23:18.879 00:23:18.879 --- 10.0.0.3 ping statistics --- 00:23:18.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.879 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:18.879 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:18.879 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:23:18.879 00:23:18.879 --- 10.0.0.4 ping statistics --- 00:23:18.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.879 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:18.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:18.879 00:23:18.879 --- 10.0.0.1 ping statistics --- 00:23:18.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.879 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:18.879 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:18.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:18.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:23:18.879 00:23:18.879 --- 10.0.0.2 ping statistics --- 00:23:18.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.880 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:18.880 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.880 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:23:18.880 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:18.880 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.880 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:18.880 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:18.880 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.880 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:18.880 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:19.137 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:19.137 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:19.137 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:19.137 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:19.138 13:17:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:19.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:19.396 Waiting for block devices as requested 00:23:19.396 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:19.654 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:19.654 No valid GPT data, bailing 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:19.654 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:19.655 13:17:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:19.655 No valid GPT data, bailing 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:19.655 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:19.913 No valid GPT data, bailing 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:19.913 No valid GPT data, bailing 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:19.913 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:19.914 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:19.914 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:19.914 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:19.914 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:19.914 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:19.914 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:19.914 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -a 10.0.0.1 -t tcp -s 4420 00:23:19.914 00:23:19.914 Discovery Log Number of Records 2, Generation counter 2 00:23:19.914 =====Discovery Log Entry 0====== 00:23:19.914 trtype: tcp 00:23:19.914 adrfam: ipv4 00:23:19.914 subtype: current discovery subsystem 00:23:19.914 treq: not specified, sq flow control disable supported 00:23:19.914 portid: 1 00:23:19.914 trsvcid: 4420 00:23:19.914 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:19.914 traddr: 10.0.0.1 00:23:19.914 eflags: none 00:23:19.914 sectype: none 00:23:19.914 =====Discovery Log Entry 1====== 00:23:19.914 trtype: tcp 00:23:19.914 adrfam: ipv4 00:23:19.914 subtype: nvme subsystem 00:23:19.914 treq: not 
specified, sq flow control disable supported 00:23:19.914 portid: 1 00:23:19.914 trsvcid: 4420 00:23:19.914 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:19.914 traddr: 10.0.0.1 00:23:19.914 eflags: none 00:23:19.914 sectype: none 00:23:19.914 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:19.914 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:20.172 ===================================================== 00:23:20.172 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:20.172 ===================================================== 00:23:20.172 Controller Capabilities/Features 00:23:20.172 ================================ 00:23:20.172 Vendor ID: 0000 00:23:20.172 Subsystem Vendor ID: 0000 00:23:20.172 Serial Number: 41f555a334f8441de2ef 00:23:20.172 Model Number: Linux 00:23:20.172 Firmware Version: 6.8.9-20 00:23:20.172 Recommended Arb Burst: 0 00:23:20.172 IEEE OUI Identifier: 00 00 00 00:23:20.172 Multi-path I/O 00:23:20.172 May have multiple subsystem ports: No 00:23:20.172 May have multiple controllers: No 00:23:20.172 Associated with SR-IOV VF: No 00:23:20.172 Max Data Transfer Size: Unlimited 00:23:20.172 Max Number of Namespaces: 0 00:23:20.172 Max Number of I/O Queues: 1024 00:23:20.172 NVMe Specification Version (VS): 1.3 00:23:20.172 NVMe Specification Version (Identify): 1.3 00:23:20.172 Maximum Queue Entries: 1024 00:23:20.172 Contiguous Queues Required: No 00:23:20.172 Arbitration Mechanisms Supported 00:23:20.172 Weighted Round Robin: Not Supported 00:23:20.172 Vendor Specific: Not Supported 00:23:20.172 Reset Timeout: 7500 ms 00:23:20.172 Doorbell Stride: 4 bytes 00:23:20.173 NVM Subsystem Reset: Not Supported 00:23:20.173 Command Sets Supported 00:23:20.173 NVM Command Set: Supported 00:23:20.173 Boot Partition: Not Supported 00:23:20.173 Memory Page Size Minimum: 4096 bytes 00:23:20.173 Memory Page Size Maximum: 4096 bytes 00:23:20.173 Persistent Memory Region: Not Supported 00:23:20.173 Optional Asynchronous Events Supported 00:23:20.173 Namespace Attribute Notices: Not Supported 00:23:20.173 Firmware Activation Notices: Not Supported 00:23:20.173 ANA Change Notices: Not Supported 00:23:20.173 PLE Aggregate Log Change Notices: Not Supported 00:23:20.173 LBA Status Info Alert Notices: Not Supported 00:23:20.173 EGE Aggregate Log Change Notices: Not Supported 00:23:20.173 Normal NVM Subsystem Shutdown event: Not Supported 00:23:20.173 Zone Descriptor Change Notices: Not Supported 00:23:20.173 Discovery Log Change Notices: Supported 00:23:20.173 Controller Attributes 00:23:20.173 128-bit Host Identifier: Not Supported 00:23:20.173 Non-Operational Permissive Mode: Not Supported 00:23:20.173 NVM Sets: Not Supported 00:23:20.173 Read Recovery Levels: Not Supported 00:23:20.173 Endurance Groups: Not Supported 00:23:20.173 Predictable Latency Mode: Not Supported 00:23:20.173 Traffic Based Keep ALive: Not Supported 00:23:20.173 Namespace Granularity: Not Supported 00:23:20.173 SQ Associations: Not Supported 00:23:20.173 UUID List: Not Supported 00:23:20.173 Multi-Domain Subsystem: Not Supported 00:23:20.173 Fixed Capacity Management: Not Supported 00:23:20.173 Variable Capacity Management: Not Supported 00:23:20.173 Delete Endurance Group: Not Supported 00:23:20.173 Delete NVM Set: Not Supported 00:23:20.173 Extended LBA Formats Supported: Not Supported 00:23:20.173 Flexible Data 
Placement Supported: Not Supported 00:23:20.173 00:23:20.173 Controller Memory Buffer Support 00:23:20.173 ================================ 00:23:20.173 Supported: No 00:23:20.173 00:23:20.173 Persistent Memory Region Support 00:23:20.173 ================================ 00:23:20.173 Supported: No 00:23:20.173 00:23:20.173 Admin Command Set Attributes 00:23:20.173 ============================ 00:23:20.173 Security Send/Receive: Not Supported 00:23:20.173 Format NVM: Not Supported 00:23:20.173 Firmware Activate/Download: Not Supported 00:23:20.173 Namespace Management: Not Supported 00:23:20.173 Device Self-Test: Not Supported 00:23:20.173 Directives: Not Supported 00:23:20.173 NVMe-MI: Not Supported 00:23:20.173 Virtualization Management: Not Supported 00:23:20.173 Doorbell Buffer Config: Not Supported 00:23:20.173 Get LBA Status Capability: Not Supported 00:23:20.173 Command & Feature Lockdown Capability: Not Supported 00:23:20.173 Abort Command Limit: 1 00:23:20.173 Async Event Request Limit: 1 00:23:20.173 Number of Firmware Slots: N/A 00:23:20.173 Firmware Slot 1 Read-Only: N/A 00:23:20.173 Firmware Activation Without Reset: N/A 00:23:20.173 Multiple Update Detection Support: N/A 00:23:20.173 Firmware Update Granularity: No Information Provided 00:23:20.173 Per-Namespace SMART Log: No 00:23:20.173 Asymmetric Namespace Access Log Page: Not Supported 00:23:20.173 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:20.173 Command Effects Log Page: Not Supported 00:23:20.173 Get Log Page Extended Data: Supported 00:23:20.173 Telemetry Log Pages: Not Supported 00:23:20.173 Persistent Event Log Pages: Not Supported 00:23:20.173 Supported Log Pages Log Page: May Support 00:23:20.173 Commands Supported & Effects Log Page: Not Supported 00:23:20.173 Feature Identifiers & Effects Log Page:May Support 00:23:20.173 NVMe-MI Commands & Effects Log Page: May Support 00:23:20.173 Data Area 4 for Telemetry Log: Not Supported 00:23:20.173 Error Log Page Entries Supported: 1 00:23:20.173 Keep Alive: Not Supported 00:23:20.173 00:23:20.173 NVM Command Set Attributes 00:23:20.173 ========================== 00:23:20.173 Submission Queue Entry Size 00:23:20.173 Max: 1 00:23:20.173 Min: 1 00:23:20.173 Completion Queue Entry Size 00:23:20.173 Max: 1 00:23:20.173 Min: 1 00:23:20.173 Number of Namespaces: 0 00:23:20.173 Compare Command: Not Supported 00:23:20.173 Write Uncorrectable Command: Not Supported 00:23:20.173 Dataset Management Command: Not Supported 00:23:20.173 Write Zeroes Command: Not Supported 00:23:20.173 Set Features Save Field: Not Supported 00:23:20.173 Reservations: Not Supported 00:23:20.173 Timestamp: Not Supported 00:23:20.173 Copy: Not Supported 00:23:20.173 Volatile Write Cache: Not Present 00:23:20.173 Atomic Write Unit (Normal): 1 00:23:20.173 Atomic Write Unit (PFail): 1 00:23:20.173 Atomic Compare & Write Unit: 1 00:23:20.173 Fused Compare & Write: Not Supported 00:23:20.173 Scatter-Gather List 00:23:20.173 SGL Command Set: Supported 00:23:20.173 SGL Keyed: Not Supported 00:23:20.173 SGL Bit Bucket Descriptor: Not Supported 00:23:20.173 SGL Metadata Pointer: Not Supported 00:23:20.173 Oversized SGL: Not Supported 00:23:20.173 SGL Metadata Address: Not Supported 00:23:20.173 SGL Offset: Supported 00:23:20.173 Transport SGL Data Block: Not Supported 00:23:20.173 Replay Protected Memory Block: Not Supported 00:23:20.173 00:23:20.173 Firmware Slot Information 00:23:20.173 ========================= 00:23:20.173 Active slot: 0 00:23:20.173 00:23:20.173 00:23:20.173 Error Log 
00:23:20.173 ========= 00:23:20.173 00:23:20.173 Active Namespaces 00:23:20.173 ================= 00:23:20.173 Discovery Log Page 00:23:20.173 ================== 00:23:20.173 Generation Counter: 2 00:23:20.173 Number of Records: 2 00:23:20.173 Record Format: 0 00:23:20.173 00:23:20.173 Discovery Log Entry 0 00:23:20.173 ---------------------- 00:23:20.173 Transport Type: 3 (TCP) 00:23:20.173 Address Family: 1 (IPv4) 00:23:20.173 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:20.173 Entry Flags: 00:23:20.173 Duplicate Returned Information: 0 00:23:20.173 Explicit Persistent Connection Support for Discovery: 0 00:23:20.173 Transport Requirements: 00:23:20.173 Secure Channel: Not Specified 00:23:20.173 Port ID: 1 (0x0001) 00:23:20.173 Controller ID: 65535 (0xffff) 00:23:20.173 Admin Max SQ Size: 32 00:23:20.173 Transport Service Identifier: 4420 00:23:20.173 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:20.173 Transport Address: 10.0.0.1 00:23:20.173 Discovery Log Entry 1 00:23:20.173 ---------------------- 00:23:20.173 Transport Type: 3 (TCP) 00:23:20.173 Address Family: 1 (IPv4) 00:23:20.173 Subsystem Type: 2 (NVM Subsystem) 00:23:20.173 Entry Flags: 00:23:20.173 Duplicate Returned Information: 0 00:23:20.173 Explicit Persistent Connection Support for Discovery: 0 00:23:20.173 Transport Requirements: 00:23:20.173 Secure Channel: Not Specified 00:23:20.173 Port ID: 1 (0x0001) 00:23:20.173 Controller ID: 65535 (0xffff) 00:23:20.173 Admin Max SQ Size: 32 00:23:20.173 Transport Service Identifier: 4420 00:23:20.173 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:20.173 Transport Address: 10.0.0.1 00:23:20.173 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:20.432 get_feature(0x01) failed 00:23:20.432 get_feature(0x02) failed 00:23:20.432 get_feature(0x04) failed 00:23:20.432 ===================================================== 00:23:20.432 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:20.432 ===================================================== 00:23:20.432 Controller Capabilities/Features 00:23:20.432 ================================ 00:23:20.432 Vendor ID: 0000 00:23:20.432 Subsystem Vendor ID: 0000 00:23:20.432 Serial Number: 2e1bb005fe199978c30d 00:23:20.432 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:20.432 Firmware Version: 6.8.9-20 00:23:20.432 Recommended Arb Burst: 6 00:23:20.432 IEEE OUI Identifier: 00 00 00 00:23:20.432 Multi-path I/O 00:23:20.432 May have multiple subsystem ports: Yes 00:23:20.432 May have multiple controllers: Yes 00:23:20.432 Associated with SR-IOV VF: No 00:23:20.432 Max Data Transfer Size: Unlimited 00:23:20.432 Max Number of Namespaces: 1024 00:23:20.432 Max Number of I/O Queues: 128 00:23:20.432 NVMe Specification Version (VS): 1.3 00:23:20.432 NVMe Specification Version (Identify): 1.3 00:23:20.432 Maximum Queue Entries: 1024 00:23:20.432 Contiguous Queues Required: No 00:23:20.432 Arbitration Mechanisms Supported 00:23:20.432 Weighted Round Robin: Not Supported 00:23:20.432 Vendor Specific: Not Supported 00:23:20.432 Reset Timeout: 7500 ms 00:23:20.432 Doorbell Stride: 4 bytes 00:23:20.432 NVM Subsystem Reset: Not Supported 00:23:20.432 Command Sets Supported 00:23:20.432 NVM Command Set: Supported 00:23:20.432 Boot Partition: Not Supported 00:23:20.432 Memory 
Page Size Minimum: 4096 bytes 00:23:20.432 Memory Page Size Maximum: 4096 bytes 00:23:20.432 Persistent Memory Region: Not Supported 00:23:20.432 Optional Asynchronous Events Supported 00:23:20.432 Namespace Attribute Notices: Supported 00:23:20.432 Firmware Activation Notices: Not Supported 00:23:20.432 ANA Change Notices: Supported 00:23:20.432 PLE Aggregate Log Change Notices: Not Supported 00:23:20.432 LBA Status Info Alert Notices: Not Supported 00:23:20.432 EGE Aggregate Log Change Notices: Not Supported 00:23:20.432 Normal NVM Subsystem Shutdown event: Not Supported 00:23:20.432 Zone Descriptor Change Notices: Not Supported 00:23:20.432 Discovery Log Change Notices: Not Supported 00:23:20.432 Controller Attributes 00:23:20.432 128-bit Host Identifier: Supported 00:23:20.432 Non-Operational Permissive Mode: Not Supported 00:23:20.432 NVM Sets: Not Supported 00:23:20.432 Read Recovery Levels: Not Supported 00:23:20.432 Endurance Groups: Not Supported 00:23:20.432 Predictable Latency Mode: Not Supported 00:23:20.432 Traffic Based Keep ALive: Supported 00:23:20.432 Namespace Granularity: Not Supported 00:23:20.432 SQ Associations: Not Supported 00:23:20.432 UUID List: Not Supported 00:23:20.432 Multi-Domain Subsystem: Not Supported 00:23:20.432 Fixed Capacity Management: Not Supported 00:23:20.432 Variable Capacity Management: Not Supported 00:23:20.432 Delete Endurance Group: Not Supported 00:23:20.432 Delete NVM Set: Not Supported 00:23:20.432 Extended LBA Formats Supported: Not Supported 00:23:20.432 Flexible Data Placement Supported: Not Supported 00:23:20.432 00:23:20.432 Controller Memory Buffer Support 00:23:20.432 ================================ 00:23:20.432 Supported: No 00:23:20.432 00:23:20.432 Persistent Memory Region Support 00:23:20.432 ================================ 00:23:20.432 Supported: No 00:23:20.432 00:23:20.432 Admin Command Set Attributes 00:23:20.432 ============================ 00:23:20.432 Security Send/Receive: Not Supported 00:23:20.432 Format NVM: Not Supported 00:23:20.432 Firmware Activate/Download: Not Supported 00:23:20.432 Namespace Management: Not Supported 00:23:20.432 Device Self-Test: Not Supported 00:23:20.432 Directives: Not Supported 00:23:20.432 NVMe-MI: Not Supported 00:23:20.432 Virtualization Management: Not Supported 00:23:20.432 Doorbell Buffer Config: Not Supported 00:23:20.432 Get LBA Status Capability: Not Supported 00:23:20.432 Command & Feature Lockdown Capability: Not Supported 00:23:20.432 Abort Command Limit: 4 00:23:20.432 Async Event Request Limit: 4 00:23:20.432 Number of Firmware Slots: N/A 00:23:20.432 Firmware Slot 1 Read-Only: N/A 00:23:20.432 Firmware Activation Without Reset: N/A 00:23:20.432 Multiple Update Detection Support: N/A 00:23:20.432 Firmware Update Granularity: No Information Provided 00:23:20.432 Per-Namespace SMART Log: Yes 00:23:20.432 Asymmetric Namespace Access Log Page: Supported 00:23:20.432 ANA Transition Time : 10 sec 00:23:20.432 00:23:20.432 Asymmetric Namespace Access Capabilities 00:23:20.432 ANA Optimized State : Supported 00:23:20.432 ANA Non-Optimized State : Supported 00:23:20.432 ANA Inaccessible State : Supported 00:23:20.432 ANA Persistent Loss State : Supported 00:23:20.432 ANA Change State : Supported 00:23:20.432 ANAGRPID is not changed : No 00:23:20.432 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:20.432 00:23:20.432 ANA Group Identifier Maximum : 128 00:23:20.432 Number of ANA Group Identifiers : 128 00:23:20.432 Max Number of Allowed Namespaces : 1024 00:23:20.432 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:23:20.432 Command Effects Log Page: Supported 00:23:20.432 Get Log Page Extended Data: Supported 00:23:20.432 Telemetry Log Pages: Not Supported 00:23:20.432 Persistent Event Log Pages: Not Supported 00:23:20.432 Supported Log Pages Log Page: May Support 00:23:20.432 Commands Supported & Effects Log Page: Not Supported 00:23:20.432 Feature Identifiers & Effects Log Page: May Support 00:23:20.432 NVMe-MI Commands & Effects Log Page: May Support 00:23:20.433 Data Area 4 for Telemetry Log: Not Supported 00:23:20.433 Error Log Page Entries Supported: 128 00:23:20.433 Keep Alive: Supported 00:23:20.433 Keep Alive Granularity: 1000 ms 00:23:20.433 00:23:20.433 NVM Command Set Attributes 00:23:20.433 ========================== 00:23:20.433 Submission Queue Entry Size 00:23:20.433 Max: 64 00:23:20.433 Min: 64 00:23:20.433 Completion Queue Entry Size 00:23:20.433 Max: 16 00:23:20.433 Min: 16 00:23:20.433 Number of Namespaces: 1024 00:23:20.433 Compare Command: Not Supported 00:23:20.433 Write Uncorrectable Command: Not Supported 00:23:20.433 Dataset Management Command: Supported 00:23:20.433 Write Zeroes Command: Supported 00:23:20.433 Set Features Save Field: Not Supported 00:23:20.433 Reservations: Not Supported 00:23:20.433 Timestamp: Not Supported 00:23:20.433 Copy: Not Supported 00:23:20.433 Volatile Write Cache: Present 00:23:20.433 Atomic Write Unit (Normal): 1 00:23:20.433 Atomic Write Unit (PFail): 1 00:23:20.433 Atomic Compare & Write Unit: 1 00:23:20.433 Fused Compare & Write: Not Supported 00:23:20.433 Scatter-Gather List 00:23:20.433 SGL Command Set: Supported 00:23:20.433 SGL Keyed: Not Supported 00:23:20.433 SGL Bit Bucket Descriptor: Not Supported 00:23:20.433 SGL Metadata Pointer: Not Supported 00:23:20.433 Oversized SGL: Not Supported 00:23:20.433 SGL Metadata Address: Not Supported 00:23:20.433 SGL Offset: Supported 00:23:20.433 Transport SGL Data Block: Not Supported 00:23:20.433 Replay Protected Memory Block: Not Supported 00:23:20.433 00:23:20.433 Firmware Slot Information 00:23:20.433 ========================= 00:23:20.433 Active slot: 0 00:23:20.433 00:23:20.433 Asymmetric Namespace Access 00:23:20.433 =========================== 00:23:20.433 Change Count : 0 00:23:20.433 Number of ANA Group Descriptors : 1 00:23:20.433 ANA Group Descriptor : 0 00:23:20.433 ANA Group ID : 1 00:23:20.433 Number of NSID Values : 1 00:23:20.433 Change Count : 0 00:23:20.433 ANA State : 1 00:23:20.433 Namespace Identifier : 1 00:23:20.433 00:23:20.433 Commands Supported and Effects 00:23:20.433 ============================== 00:23:20.433 Admin Commands 00:23:20.433 -------------- 00:23:20.433 Get Log Page (02h): Supported 00:23:20.433 Identify (06h): Supported 00:23:20.433 Abort (08h): Supported 00:23:20.433 Set Features (09h): Supported 00:23:20.433 Get Features (0Ah): Supported 00:23:20.433 Asynchronous Event Request (0Ch): Supported 00:23:20.433 Keep Alive (18h): Supported 00:23:20.433 I/O Commands 00:23:20.433 ------------ 00:23:20.433 Flush (00h): Supported 00:23:20.433 Write (01h): Supported LBA-Change 00:23:20.433 Read (02h): Supported 00:23:20.433 Write Zeroes (08h): Supported LBA-Change 00:23:20.433 Dataset Management (09h): Supported 00:23:20.433 00:23:20.433 Error Log 00:23:20.433 ========= 00:23:20.433 Entry: 0 00:23:20.433 Error Count: 0x3 00:23:20.433 Submission Queue Id: 0x0 00:23:20.433 Command Id: 0x5 00:23:20.433 Phase Bit: 0 00:23:20.433 Status Code: 0x2 00:23:20.433 Status Code Type: 0x0 00:23:20.433 Do Not Retry: 1 00:23:20.433 Error
Location: 0x28 00:23:20.433 LBA: 0x0 00:23:20.433 Namespace: 0x0 00:23:20.433 Vendor Log Page: 0x0 00:23:20.433 ----------- 00:23:20.433 Entry: 1 00:23:20.433 Error Count: 0x2 00:23:20.433 Submission Queue Id: 0x0 00:23:20.433 Command Id: 0x5 00:23:20.433 Phase Bit: 0 00:23:20.433 Status Code: 0x2 00:23:20.433 Status Code Type: 0x0 00:23:20.433 Do Not Retry: 1 00:23:20.433 Error Location: 0x28 00:23:20.433 LBA: 0x0 00:23:20.433 Namespace: 0x0 00:23:20.433 Vendor Log Page: 0x0 00:23:20.433 ----------- 00:23:20.433 Entry: 2 00:23:20.433 Error Count: 0x1 00:23:20.433 Submission Queue Id: 0x0 00:23:20.433 Command Id: 0x4 00:23:20.433 Phase Bit: 0 00:23:20.433 Status Code: 0x2 00:23:20.433 Status Code Type: 0x0 00:23:20.433 Do Not Retry: 1 00:23:20.433 Error Location: 0x28 00:23:20.433 LBA: 0x0 00:23:20.433 Namespace: 0x0 00:23:20.433 Vendor Log Page: 0x0 00:23:20.433 00:23:20.433 Number of Queues 00:23:20.433 ================ 00:23:20.433 Number of I/O Submission Queues: 128 00:23:20.433 Number of I/O Completion Queues: 128 00:23:20.433 00:23:20.433 ZNS Specific Controller Data 00:23:20.433 ============================ 00:23:20.433 Zone Append Size Limit: 0 00:23:20.433 00:23:20.433 00:23:20.433 Active Namespaces 00:23:20.433 ================= 00:23:20.433 get_feature(0x05) failed 00:23:20.433 Namespace ID:1 00:23:20.433 Command Set Identifier: NVM (00h) 00:23:20.433 Deallocate: Supported 00:23:20.433 Deallocated/Unwritten Error: Not Supported 00:23:20.433 Deallocated Read Value: Unknown 00:23:20.433 Deallocate in Write Zeroes: Not Supported 00:23:20.433 Deallocated Guard Field: 0xFFFF 00:23:20.433 Flush: Supported 00:23:20.433 Reservation: Not Supported 00:23:20.433 Namespace Sharing Capabilities: Multiple Controllers 00:23:20.433 Size (in LBAs): 1310720 (5GiB) 00:23:20.433 Capacity (in LBAs): 1310720 (5GiB) 00:23:20.433 Utilization (in LBAs): 1310720 (5GiB) 00:23:20.433 UUID: 091e8661-d8cc-42ef-8213-8044cffbbfb4 00:23:20.433 Thin Provisioning: Not Supported 00:23:20.433 Per-NS Atomic Units: Yes 00:23:20.433 Atomic Boundary Size (Normal): 0 00:23:20.433 Atomic Boundary Size (PFail): 0 00:23:20.433 Atomic Boundary Offset: 0 00:23:20.433 NGUID/EUI64 Never Reused: No 00:23:20.433 ANA group ID: 1 00:23:20.433 Namespace Write Protected: No 00:23:20.433 Number of LBA Formats: 1 00:23:20.433 Current LBA Format: LBA Format #00 00:23:20.433 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:20.433 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.433 rmmod nvme_tcp 00:23:20.433 rmmod nvme_fabrics 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:20.433 13:17:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:20.433 13:17:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:20.433 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:20.433 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:20.433 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:20.433 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:20.433 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:20.433 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:20.692 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:20.693 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:20.693 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:20.693 13:17:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:21.266 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:21.524 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:21.524 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:21.524 00:23:21.524 real 0m3.239s 00:23:21.524 user 0m1.132s 00:23:21.524 sys 0m1.436s 00:23:21.524 13:17:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.524 ************************************ 00:23:21.524 END TEST nvmf_identify_kernel_target 00:23:21.524 13:17:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.524 ************************************ 00:23:21.524 13:17:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:21.524 13:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.524 13:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.524 13:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.524 ************************************ 00:23:21.524 START TEST nvmf_auth_host 00:23:21.524 ************************************ 00:23:21.524 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:21.783 * Looking for test storage... 
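
The clean_kernel_target teardown traced at the end of the previous test reduces to a short configfs sequence. A minimal sketch of that sequence, assuming the layout this run created (subsystem nqn.2016-06.io.spdk:testnqn with namespace 1 exported on port 1); the file behind the bare "echo 0" step is not visible in the trace and is assumed here to be the namespace enable flag:

nqn=nqn.2016-06.io.spdk:testnqn
cfs=/sys/kernel/config/nvmet
echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"  # assumed target of the 'echo 0' step
rm -f "$cfs/ports/1/subsystems/$nqn"                 # unlink the subsystem from the TCP port
rmdir "$cfs/subsystems/$nqn/namespaces/1"            # configfs objects are removed with rmdir,
rmdir "$cfs/ports/1"                                 # never rm -rf
rmdir "$cfs/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                          # unload the kernel target modules
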
00:23:21.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.783 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.784 --rc genhtml_branch_coverage=1 00:23:21.784 --rc genhtml_function_coverage=1 00:23:21.784 --rc genhtml_legend=1 00:23:21.784 --rc geninfo_all_blocks=1 00:23:21.784 --rc geninfo_unexecuted_blocks=1 00:23:21.784 00:23:21.784 ' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:21.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.784 --rc genhtml_branch_coverage=1 00:23:21.784 --rc genhtml_function_coverage=1 00:23:21.784 --rc genhtml_legend=1 00:23:21.784 --rc geninfo_all_blocks=1 00:23:21.784 --rc geninfo_unexecuted_blocks=1 00:23:21.784 00:23:21.784 ' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.784 --rc genhtml_branch_coverage=1 00:23:21.784 --rc genhtml_function_coverage=1 00:23:21.784 --rc genhtml_legend=1 00:23:21.784 --rc geninfo_all_blocks=1 00:23:21.784 --rc geninfo_unexecuted_blocks=1 00:23:21.784 00:23:21.784 ' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.784 --rc genhtml_branch_coverage=1 00:23:21.784 --rc genhtml_function_coverage=1 00:23:21.784 --rc genhtml_legend=1 00:23:21.784 --rc geninfo_all_blocks=1 00:23:21.784 --rc geninfo_unexecuted_blocks=1 00:23:21.784 00:23:21.784 ' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.784 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:21.784 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:21.785 Cannot find device "nvmf_init_br" 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:21.785 Cannot find device "nvmf_init_br2" 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:21.785 Cannot find device "nvmf_tgt_br" 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:21.785 Cannot find device "nvmf_tgt_br2" 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:23:21.785 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:22.043 Cannot find device "nvmf_init_br" 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:22.043 Cannot find device "nvmf_init_br2" 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:22.043 Cannot find device "nvmf_tgt_br" 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:22.043 Cannot find device "nvmf_tgt_br2" 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:22.043 Cannot find device "nvmf_br" 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:22.043 Cannot find device "nvmf_init_if" 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:22.043 Cannot find device "nvmf_init_if2" 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:22.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.043 13:17:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:22.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:22.043 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:22.044 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:22.044 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
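
The nvmf_veth_init sequence above builds a small two-sided topology: the initiator ends of the veth pairs stay in the root namespace, the target ends move into nvmf_tgt_ns_spdk, and everything is joined at layer 2 through the nvmf_br bridge. A condensed sketch of one interface pair, using the same names and addresses as the trace (the *_if2/*_br2 pair repeats the identical steps with 10.0.0.2 and 10.0.0.4, and the ipts wrapper that follows additionally opens TCP port 4420 with comment-tagged iptables rules):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br

The pings that follow confirm both directions across the bridge before any NVMe-oF traffic is attempted.
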
00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:22.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:22.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.144 ms 00:23:22.302 00:23:22.302 --- 10.0.0.3 ping statistics --- 00:23:22.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.302 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:22.302 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:22.302 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:23:22.302 00:23:22.302 --- 10.0.0.4 ping statistics --- 00:23:22.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.302 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:22.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:23:22.302 00:23:22.302 --- 10.0.0.1 ping statistics --- 00:23:22.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.302 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:22.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:22.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:23:22.302 00:23:22.302 --- 10.0.0.2 ping statistics --- 00:23:22.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.302 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=92833 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 92833 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92833 ']' 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
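
nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app's RPC socket answers. A minimal sketch of that start-and-wait pattern, not the verbatim waitforlisten implementation (rpc_get_methods is a standard SPDK RPC, used here only as a liveness probe):

spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk \
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the target answers or the process dies.
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
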
00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.302 13:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=304bb5c6279e7c3fcab29a56ed7a03bb 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ug4 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 304bb5c6279e7c3fcab29a56ed7a03bb 0 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 304bb5c6279e7c3fcab29a56ed7a03bb 0 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=304bb5c6279e7c3fcab29a56ed7a03bb 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ug4 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ug4 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ug4 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:22.869 13:17:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ace3e3078a61d2aa2872323f6e906a507484fabc174d15ea29c2733cd5a87b6a 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1GR 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ace3e3078a61d2aa2872323f6e906a507484fabc174d15ea29c2733cd5a87b6a 3 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ace3e3078a61d2aa2872323f6e906a507484fabc174d15ea29c2733cd5a87b6a 3 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ace3e3078a61d2aa2872323f6e906a507484fabc174d15ea29c2733cd5a87b6a 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1GR 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1GR 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.1GR 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c2ce6d58c05e58b8d76ac62578301c43e6ab775b6cde1125 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XRC 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c2ce6d58c05e58b8d76ac62578301c43e6ab775b6cde1125 0 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c2ce6d58c05e58b8d76ac62578301c43e6ab775b6cde1125 0 
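
Each gen_dhchap_key call in this stretch draws len/2 random bytes with xxd, and format_dhchap_key then wraps them in the NVMe in-band-authentication secret representation DHHC-1:<hash>:<base64(key || CRC-32)>:, where the hash indicator matches the digest indices visible in the trace (00 = null, 01 = sha256, 02 = sha384, 03 = sha512). A standalone sketch of the 32-hex-character null-digest case; the inline python mirrors what the traced "python -" step computes rather than quoting its source:

key=$(xxd -p -c0 -l 16 /dev/urandom)    # 16 random bytes -> 32 hex characters
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'EOF'
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")  # secret body is key || CRC-32 (little-endian)
print(f"DHHC-1:00:{base64.b64encode(raw + crc).decode()}:")
EOF
chmod 0600 "$file"                      # DH-HMAC-CHAP keys must not be world-readable
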
00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c2ce6d58c05e58b8d76ac62578301c43e6ab775b6cde1125 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XRC 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XRC 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XRC 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=40670460467a54acd2f06de90d7090a68d935c12f0905a6f 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.MVA 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 40670460467a54acd2f06de90d7090a68d935c12f0905a6f 2 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 40670460467a54acd2f06de90d7090a68d935c12f0905a6f 2 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=40670460467a54acd2f06de90d7090a68d935c12f0905a6f 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.MVA 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.MVA 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.MVA 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.869 13:17:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ccfb8350d7889f3487c9e7558cb8ddf2 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dPf 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ccfb8350d7889f3487c9e7558cb8ddf2 1 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ccfb8350d7889f3487c9e7558cb8ddf2 1 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ccfb8350d7889f3487c9e7558cb8ddf2 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:22.869 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dPf 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dPf 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.dPf 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cedc8655ffe7203c1275802a9db7fb8f 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Hyz 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cedc8655ffe7203c1275802a9db7fb8f 1 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cedc8655ffe7203c1275802a9db7fb8f 1 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=cedc8655ffe7203c1275802a9db7fb8f 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Hyz 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Hyz 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Hyz 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=20edb284c84e804edbc8423a0ad2429f3ee71e0b5b65fb35 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lmQ 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 20edb284c84e804edbc8423a0ad2429f3ee71e0b5b65fb35 2 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 20edb284c84e804edbc8423a0ad2429f3ee71e0b5b65fb35 2 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:23.128 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=20edb284c84e804edbc8423a0ad2429f3ee71e0b5b65fb35 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lmQ 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lmQ 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.lmQ 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:23.129 13:17:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=657dd26f6f3103b1597339e4be4b6a37 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hzh 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 657dd26f6f3103b1597339e4be4b6a37 0 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 657dd26f6f3103b1597339e4be4b6a37 0 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=657dd26f6f3103b1597339e4be4b6a37 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hzh 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hzh 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hzh 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=da642301333c263a68399eeaa69450945dc3f1c17d8f325186ec990e2a208909 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zNk 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key da642301333c263a68399eeaa69450945dc3f1c17d8f325186ec990e2a208909 3 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 da642301333c263a68399eeaa69450945dc3f1c17d8f325186ec990e2a208909 3 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=da642301333c263a68399eeaa69450945dc3f1c17d8f325186ec990e2a208909 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:23.129 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zNk 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zNk 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.zNk 00:23:23.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92833 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92833 ']' 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.387 13:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ug4 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.1GR ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1GR 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XRC 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.MVA ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.MVA 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.dPf 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Hyz ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Hyz 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.lmQ 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hzh ]] 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hzh 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.646 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zNk 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:23.647 13:17:12 
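[Editor's note] Once waitforlisten sees the target on /var/tmp/spdk.sock, every generated key file is registered with SPDK's keyring over JSON-RPC; the paired controller key ckeyN is added only when one was generated (ckeys[4] is empty in this run, hence the [[ -n ... ]] guards). Stripped of the rpc_cmd harness wrapper, the loop is roughly:

  # Register host keys key0..key4 and controller keys ckey0..ckey3 with the
  # SPDK keyring; the paths are the temp files printed earlier in the log.
  for i in "${!keys[@]}"; do
      scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
      [[ -n ${ckeys[i]:-} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
  done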
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:23.647 13:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:24.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:24.215 Waiting for block devices as requested 00:23:24.215 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:24.215 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:24.782 No valid GPT data, bailing 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:24.782 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:24.782 No valid GPT data, bailing 00:23:25.040 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
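[Editor's note] configure_kernel_target builds a kernel-mode NVMe-oF target for the host to authenticate against. The scan above walks /sys/block/nvme*, skips zoned namespaces, and treats a namespace as free when no partition table is found ("No valid GPT data, bailing" is the expected outcome); the last free device backs the target, /dev/nvme1n1 in this run. A condensed sketch of the scan plus the configfs assembly that follows, with standard nvmet configfs attribute names assumed and the spdk-gpt.py probe approximated with blkid (root required):

  nqn=nqn.2024-02.io.spdk:cnode0 ip=10.0.0.1 nvme=
  modprobe nvmet
  for block in /sys/block/nvme*; do
      dev=/dev/${block##*/}
      # Skip zoned namespaces, like the is_block_zoned checks above.
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # No partition table -> not in use; the last free namespace wins.
      blkid -s PTTYPE -o value "$dev" > /dev/null || nvme=$dev
  done
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo "SPDK-$nqn" > "$subsys/attr_model"            # likely target of the "echo SPDK-nqn..." above (attribute assumed)
  echo 1           > "$subsys/attr_allow_any_host"   # auth.sh later flips this to 0 and uses allowed_hosts
  echo "$nvme"     > "$subsys/namespaces/1/device_path"
  echo 1           > "$subsys/namespaces/1/enable"
  echo "$ip"       > "$port/addr_traddr"
  echo tcp         > "$port/addr_trtype"
  echo 4420        > "$port/addr_trsvcid"
  echo ipv4        > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"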
scripts/common.sh@395 -- # return 1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:25.041 No valid GPT data, bailing 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:25.041 No valid GPT data, bailing 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -a 10.0.0.1 -t tcp -s 4420 00:23:25.041 00:23:25.041 Discovery Log Number of Records 2, Generation counter 2 00:23:25.041 =====Discovery Log Entry 0====== 00:23:25.041 trtype: tcp 00:23:25.041 adrfam: ipv4 00:23:25.041 subtype: current discovery subsystem 00:23:25.041 treq: not specified, sq flow control disable supported 00:23:25.041 portid: 1 00:23:25.041 trsvcid: 4420 00:23:25.041 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:25.041 traddr: 10.0.0.1 00:23:25.041 eflags: none 00:23:25.041 sectype: none 00:23:25.041 =====Discovery Log Entry 1====== 00:23:25.041 trtype: tcp 00:23:25.041 adrfam: ipv4 00:23:25.041 subtype: nvme subsystem 00:23:25.041 treq: not specified, sq flow control disable supported 00:23:25.041 portid: 1 00:23:25.041 trsvcid: 4420 00:23:25.041 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:25.041 traddr: 10.0.0.1 00:23:25.041 eflags: none 00:23:25.041 sectype: none 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.041 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.299 nvme0n1 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.299 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
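[Editor's note] connect_authenticate is the core assertion of the test: with the kernel target primed to expect a given key, the SPDK host must complete DH-HMAC-CHAP during attach. The @93/@94 IFS=, printf lines simply build comma-separated digest and DH-group lists. Reduced to plain RPC calls, the sequence traced above is:

  # Allow every digest/DH group on the host side, then attach with key1/ckey1.
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1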
host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:25.558 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.559 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.559 13:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.559 nvme0n1 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.559 
13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:25.559 13:17:14 
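[Editor's note] get_main_ns_ip, traced before every attach, only maps the transport to the right harness environment variable and prints its value (NVMF_INITIATOR_IP=10.0.0.1 for TCP here). A trimmed sketch; the TEST_TRANSPORT and NVMF_* variable names are assumptions about what the harness exports:

  get_main_ns_ip() {
      local ip_var ip
      declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      ip_var=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
      [[ -z $ip_var ]] && return 1               # the "[[ -z tcp ]]" guard above
      ip=${!ip_var}                              # indirect expansion -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }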
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.559 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.818 nvme0n1 00:23:25.818 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.818 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.818 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:25.819 13:17:14 
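[Editor's note] nvmet_auth_set_key re-keys the kernel target before each attach. xtrace does not print redirection targets, so only the echo payloads ('hmac(sha256)', the DH group, and the two DHHC-1 strings) are visible; on the kernel side they land in the host's configfs directory. A sketch assuming the standard nvmet in-band-auth attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the DHHC-1 strings are placeholders for the key/ckey values printed above:

  hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'         > "$hostdir/dhchap_hash"      # digest to negotiate
  echo ffdhe2048              > "$hostdir/dhchap_dhgroup"   # DH group for the exchange
  echo 'DHHC-1:01:<host key>' > "$hostdir/dhchap_key"       # the keyid-2 secret in this pass
  echo 'DHHC-1:01:<ctrl key>' > "$hostdir/dhchap_ctrl_key"  # only when a ckey exists (bidirectional)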
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.819 nvme0n1 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.819 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.077 13:17:14 
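[Editor's note] Every cycle ends the same way: list controllers, assert the name round-tripped, then detach so the next digest/dhgroup/key combination starts clean (the bare "nvme0n1" lines are the namespace each successful attach exposed). Reduced to plain commands:

  # Verify the authenticated controller exists, then tear it down.
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || exit 1
  scripts/rpc.py bdev_nvme_detach_controller nvme0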
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.077 nvme0n1 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.077 
13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.077 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.078 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
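[Editor's note] From @100-@103 onward the trace is one long sweep: every digest times every FFDHE group times every key slot, re-keying the kernel target and re-attaching for each combination (just below it advances from ffdhe2048 to ffdhe3072). The driving loops are equivalent to:

  # 3 digests x 5 DH groups x 5 key slots of authenticated attach cycles.
  for digest in sha256 sha384 sha512; do
      for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done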
00:23:26.371 nvme0n1 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.371 13:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:26.656 13:17:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.913 nvme0n1 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.913 13:17:15 
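Each iteration ends with the verification the trace shows here: the attach only creates the bdev if DH-HMAC-CHAP succeeded, so listing controllers and comparing the name against nvme0 doubles as the pass/fail check, after which the controller is detached so the next keyid starts clean. A condensed sketch of that check (jq and a running target assumed):

    # Success criterion used at auth.sh@64: exactly one controller named nvme0.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    # Tear down (auth.sh@65) before the next keyid/dhgroup combination.
    scripts/rpc.py bdev_nvme_detach_controller nvme0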
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.913 13:17:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.913 nvme0n1 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.913 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.171 nvme0n1 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.171 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.172 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.429 nvme0n1 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.429 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.430 13:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.688 nvme0n1 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
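Note how the keyid=4 iterations differ from keyids 0-3: ckeys[4] is empty, so the `[[ -z '' ]]` test at auth.sh@51 skips the controller-key echo, and the expansion at auth.sh@58 produces no --dhchap-ctrlr-key argument at all, which is why the attach above carries only --dhchap-key key4. A sketch of that expansion with placeholder values (ck0..ck3 stand in for the real DHHC-1 strings):

    ckeys=( ck0 ck1 ck2 ck3 "" )     # placeholder ctrlr keys; keyid 4 has none
    keyid=4
    # :+ expands to the option pair only when ckeys[keyid] is non-empty, so
    # for keyid=4 the array stays empty and bidirectional auth is skipped.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]} extra args"    # "0 extra args" here, 2 for keyids 0-3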
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.688 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.253 13:17:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.253 13:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.511 nvme0n1 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.511 13:17:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.511 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.769 nvme0n1 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.769 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
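The secrets themselves follow the NVMe over Fabrics DH-HMAC-CHAP representation DHHC-1:TT:<base64>:, and the structure is visible in the strings above. Per the NVMe-oF secret format (stated here from the spec, not derivable from this log alone), TT records the hash the secret was generated for (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32, which is why the 00/01 keys here decode to 36 bytes (32-byte secrets) while the 02/03 keys decode to 52 and 68 bytes (48- and 64-byte secrets). A quick check on one key from this run:

    # 48 base64 chars -> 36 bytes -> 32-byte secret + 4-byte CRC32.
    s='DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX:'
    b64=${s#DHHC-1:*:}; b64=${b64%:}          # strip prefix and trailing colon
    printf '%s' "$b64" | base64 -d | wc -c    # prints 36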
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.770 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.112 nvme0n1 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:29.112 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.113 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.417 nvme0n1 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.417 13:17:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.417 13:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.675 nvme0n1 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
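With ffdhe4096 done for all five keyids, the trace below starts the next outer iteration. This whole section is a single pass over the nested loops at auth.sh@101-103: the outer loop walks the DH groups, the inner loop walks keyids 0-4, and each combination programs the target-side key and then runs connect_authenticate. A sketch of that shape, reconstructed from the @101/@102/@103 markers (the exact dhgroups array is an assumption; this stretch shows ffdhe2048 through ffdhe6144 with the digest fixed at sha256):

    # Reconstructed driver loop; keys/ckeys are the DHHC-1 arrays set up
    # earlier in auth.sh, indexed by keyids 0..4.
    dhgroups=( ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192 )  # 8192 assumed
    for dhgroup in "${dhgroups[@]}"; do          # auth.sh@101
        for keyid in "${!keys[@]}"; do           # auth.sh@102
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target side
            connect_authenticate sha256 "$dhgroup" "$keyid"  # attach/verify/detach
        done
    done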
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.675 13:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.573 13:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.831 nvme0n1 00:23:31.831 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==:
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]]
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==:
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:31.832 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:32.090 nvme0n1
00:23:32.090 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:32.090 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:32.090 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:32.090 13:17:20
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.090 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.347 13:17:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.347 13:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.604 nvme0n1 00:23:32.604 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.604 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.604 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.604 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.604 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.604 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.604 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.604 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.604 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:32.605 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:32.862 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:32.863 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.863 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.863 
13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.120 nvme0n1 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.120 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.121 13:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.379 nvme0n1 00:23:33.379 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.379 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.379 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.379 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.379 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.638 13:17:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.638 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.230 nvme0n1 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.230 13:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.164 nvme0n1 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.164 
13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.164 13:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.730 nvme0n1 00:23:35.730 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.730 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.731 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.297 nvme0n1 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.297 13:17:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.297 13:17:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:36.297 13:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.236 nvme0n1
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX:
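
The DHHC-1 strings cycled through these iterations follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, where <t> is 00 for an untransformed secret or 01/02/03 for a secret pre-transformed with SHA-256/384/512, and the base64 payload carries the secret bytes followed by a CRC-32 check (the same format nvme-cli generates). As a quick inspection of the keyid-0 secret above with standard tools (a sketch; the 48-character payload decodes to 36 bytes, a 32-byte secret plus the 4-byte CRC):

  # Extract the base64 field of the DHHC-1 string and dump the raw bytes;
  # per the spec the last 4 bytes are the CRC-32 over the secret.
  k='DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX:'
  printf '%s' "$k" | cut -d: -f3 | base64 -d | xxd
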
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=:
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX:
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]]
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=:
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@10 -- # set +x 00:23:37.237 nvme0n1 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.237 nvme0n1 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.237 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:37.496 
13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.496 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.497 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.497 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.497 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.497 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.497 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.497 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.497 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.497 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.497 13:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.497 nvme0n1 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.497 
13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.497 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.755 nvme0n1 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.755 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.756 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.756 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.756 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.756 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.756 nvme0n1 00:23:37.756 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.756 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.756 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.756 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.756 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.756 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.014 nvme0n1 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.014 
13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.014 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.287 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.287 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.287 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:38.287 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.287 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.287 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.287 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.287 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.288 13:17:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.288 nvme0n1 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:38.288 13:17:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.288 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.545 nvme0n1 00:23:38.545 13:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:38.545 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.546 13:17:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.546 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 nvme0n1 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.803 
13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
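The trace above repeats one pattern for every digest/dhgroup/keyid combination: program the target-side DH-HMAC-CHAP key, restrict the host to the pair under test, attach, verify the controller appeared, and detach. Below is a minimal sketch of that loop, reconstructed only from the commands visible in this trace; the keys/ckeys arrays and the nvmet_auth_set_key, rpc_cmd, and get_main_ns_ip helpers are defined in SPDK's test/nvmf/host/auth.sh and test/nvmf/nvmf/common.sh and are assumed here, not reproduced — this is not the verbatim script.

  # Sketch reconstructed from the xtrace output; keys/ckeys and helpers are
  # assumed to be defined by the SPDK test scripts shown in the trace.
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in "${!keys[@]}"; do
      # Program the kernel nvmet target with the DHCHAP key (and ctrlr key, if any).
      nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
      # Limit the host to the single digest/dhgroup pair under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
      # Connect with the matching host key; ckey$keyid is passed only when set.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # Authentication succeeded if the controller is visible, then clean up.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done

The `nvme0n1` lines interleaved in the trace are the namespace appearing after each successful attach, which is why every iteration ends with a get_controllers check against nvme0 followed by a detach.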
00:23:38.803 nvme0n1 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.803 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:39.061 13:17:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.061 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.320 nvme0n1 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.320 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.321 13:17:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.321 13:17:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.321 13:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.578 nvme0n1 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.578 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.895 nvme0n1 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:39.895 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.896 nvme0n1 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.896 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.154 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.412 nvme0n1 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:40.412 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.413 13:17:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.413 13:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.670 nvme0n1 00:23:40.670 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.670 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.670 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.670 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.670 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.670 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.670 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.670 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.670 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.670 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.928 13:17:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.928 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.185 nvme0n1 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:41.185 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.186 13:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.750 nvme0n1 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.750 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.751 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.316 nvme0n1 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.316 13:17:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.316 13:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.574 nvme0n1 00:23:42.574 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.574 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.574 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.574 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.574 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.574 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.574 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.574 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.574 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.574 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.832 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.395 nvme0n1 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.395 13:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.964 nvme0n1 00:23:43.964 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.221 13:17:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.221 13:17:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.221 13:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.786 nvme0n1 00:23:44.786 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.786 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.786 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.786 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.786 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.786 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:44.787 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.787 
13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.351 nvme0n1 00:23:45.351 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.351 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.351 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.351 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.351 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.351 13:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.608 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.173 nvme0n1 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:46.173 13:17:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:46.173 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.174 13:17:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.174 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.434 nvme0n1 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:46.434 13:17:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.434 13:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.434 nvme0n1 00:23:46.434 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.434 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.434 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.434 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.434 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.434 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.434 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.434 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.434 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.434 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.719 nvme0n1 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:46.719 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.720 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.978 nvme0n1 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.978 nvme0n1 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.978 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:47.236 nvme0n1 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.236 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.494 nvme0n1 00:23:47.494 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.494 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.494 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.494 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.494 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.494 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.494 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.494 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.494 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.494 13:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:47.494 
13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.494 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.752 nvme0n1 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.752 
13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:47.752 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.753 nvme0n1 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.753 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.012 nvme0n1 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:48.012 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.013 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.271 nvme0n1 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.271 
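The entries above make up one full iteration of the pattern this trace repeats for every digest/dhgroup/keyid combination: connect_authenticate() pins the SPDK host to a single DH-CHAP digest and DH group with bdev_nvme_set_options, resolves the target address (get_main_ns_ip selects NVMF_INITIATOR_IP for tcp and NVMF_FIRST_TARGET_IP for rdma, here 10.0.0.1), then attaches a controller with the matching key. A minimal host-side sketch of that sequence, with the NQNs and address copied from the trace; keyN/ckeyN are key names registered earlier in the run, outside this excerpt, and the rpc.py path is assumed:

  # Pin negotiation to one digest and one DH group (host/auth.sh@60):
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # Attach with DH-CHAP; --dhchap-ctrlr-key requests bidirectional auth (host/auth.sh@61):
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

The keyid=4 iterations omit --dhchap-ctrlr-key because ckey4 is empty, which is what the trace's host/auth.sh@51 test [[ -z '' ]] is checking.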
13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.271 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.529 13:17:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.529 13:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.529 nvme0n1 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:48.529 13:17:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.529 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.786 nvme0n1 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.786 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.044 13:17:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.044 nvme0n1 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:49.044 
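On the other side of each iteration, the echoes at host/auth.sh@48-50 ('hmac(sha512)', the dhgroup, the DHHC-1 key string) are nvmet_auth_set_key() programming the kernel nvmet target with the credentials the host must present. xtrace prints the echoed values but not their redirection targets, so the configfs attribute paths in this sketch are assumed, not taken from the trace:

  # Target side: what nvmet_auth_set_key sha512 ffdhe4096 4 appears to do.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host_dir/dhchap_hash"     # digest (host/auth.sh@48)
  echo 'ffdhe4096' > "$host_dir/dhchap_dhgroup"     # DH group (host/auth.sh@49)
  # host key for keyid=4, copied verbatim from the trace (host/auth.sh@50):
  echo 'DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=:' > "$host_dir/dhchap_key"
  # ckey4 is empty, so no dhchap_ctrl_key is written for this keyid

In the DHHC-1:NN:...: strings the two-digit field after DHHC-1 encodes how the secret was transformed (00 no transform, 01 SHA-256, 02 SHA-384, 03 SHA-512), which is why the five keys in this trace carry 00 through 03 prefixes.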
13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.044 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.301 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:49.301 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:49.301 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:49.302 nvme0n1 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.302 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:49.559 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.560 13:17:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.560 13:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.818 nvme0n1 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.818 13:17:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.818 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.819 13:17:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.819 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.384 nvme0n1 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.384 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:50.385 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.385 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:50.385 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:50.385 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:50.385 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.385 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.385 13:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.642 nvme0n1 00:23:50.642 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.642 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.642 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.642 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.642 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.643 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.209 nvme0n1 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.209 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.210 13:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.469 nvme0n1 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:51.469 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA0YmI1YzYyNzllN2MzZmNhYjI5YTU2ZWQ3YTAzYmLCPUwX: 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: ]] 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNlM2UzMDc4YTYxZDJhYTI4NzIzMjNmNmU5MDZhNTA3NDg0ZmFiYzE3NGQxNWVhMjljMjczM2NkNWE4N2I2YSsz0I4=: 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.470 13:17:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.470 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.403 nvme0n1 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.403 13:17:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.403 13:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.970 nvme0n1 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.970 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.971 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.971 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.971 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.971 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.971 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.971 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.971 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.971 13:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.535 nvme0n1 00:23:53.535 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.535 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.535 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.535 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.535 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.535 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBlZGIyODRjODRlODA0ZWRiYzg0MjNhMGFkMjQyOWYzZWU3MWUwYjViNjVmYjM1Qd8T1Q==: 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: ]] 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU3ZGQyNmY2ZjMxMDNiMTU5NzMzOWU0YmU0YjZhMzcRuRkG: 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.793 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.359 nvme0n1 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGE2NDIzMDEzMzNjMjYzYTY4Mzk5ZWVhYTY5NDUwOTQ1ZGMzZjFjMTdkOGYzMjUxODZlYzk5MGUyYTIwODkwOdngkr8=: 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.359 13:17:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.359 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.925 nvme0n1 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.925 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.184 2024/12/06 13:17:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:55.184 request: 00:23:55.184 { 00:23:55.184 "method": "bdev_nvme_attach_controller", 00:23:55.184 "params": { 00:23:55.184 "name": "nvme0", 00:23:55.184 "trtype": "tcp", 00:23:55.184 "traddr": "10.0.0.1", 00:23:55.184 "adrfam": "ipv4", 00:23:55.184 "trsvcid": "4420", 00:23:55.184 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:55.184 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:55.184 "prchk_reftag": false, 00:23:55.184 "prchk_guard": false, 00:23:55.184 "hdgst": false, 00:23:55.184 "ddgst": false, 00:23:55.184 "allow_unrecognized_csi": false 00:23:55.184 } 00:23:55.184 } 00:23:55.184 Got JSON-RPC error response 00:23:55.184 GoRPCClient: error on JSON-RPC call 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:55.184 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.185 2024/12/06 13:17:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:55.185 request: 00:23:55.185 { 00:23:55.185 "method": "bdev_nvme_attach_controller", 00:23:55.185 "params": { 00:23:55.185 "name": "nvme0", 00:23:55.185 "trtype": "tcp", 00:23:55.185 "traddr": "10.0.0.1", 00:23:55.185 "adrfam": "ipv4", 00:23:55.185 "trsvcid": "4420", 00:23:55.185 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:55.185 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:55.185 "prchk_reftag": false, 00:23:55.185 "prchk_guard": false, 
00:23:55.185 "hdgst": false, 00:23:55.185 "ddgst": false, 00:23:55.185 "dhchap_key": "key2", 00:23:55.185 "allow_unrecognized_csi": false 00:23:55.185 } 00:23:55.185 } 00:23:55.185 Got JSON-RPC error response 00:23:55.185 GoRPCClient: error on JSON-RPC call 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.185 2024/12/06 13:17:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:55.185 request: 00:23:55.185 { 00:23:55.185 "method": "bdev_nvme_attach_controller", 00:23:55.185 "params": { 00:23:55.185 "name": "nvme0", 00:23:55.185 "trtype": "tcp", 00:23:55.185 "traddr": "10.0.0.1", 00:23:55.185 "adrfam": "ipv4", 00:23:55.185 "trsvcid": "4420", 00:23:55.185 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:55.185 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:55.185 "prchk_reftag": false, 00:23:55.185 "prchk_guard": false, 00:23:55.185 "hdgst": false, 00:23:55.185 "ddgst": false, 00:23:55.185 "dhchap_key": "key1", 00:23:55.185 "dhchap_ctrlr_key": "ckey2", 00:23:55.185 "allow_unrecognized_csi": false 00:23:55.185 } 00:23:55.185 } 00:23:55.185 Got JSON-RPC error response 00:23:55.185 GoRPCClient: error on JSON-RPC call 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.185 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.444 nvme0n1 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.444 13:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.444 2024/12/06 13:17:44 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-5 Msg=Input/output error 00:23:55.444 request: 00:23:55.444 { 00:23:55.444 "method": "bdev_nvme_set_keys", 00:23:55.444 "params": { 00:23:55.444 "name": "nvme0", 00:23:55.444 "dhchap_key": "key1", 00:23:55.444 "dhchap_ctrlr_key": "ckey2" 00:23:55.444 } 00:23:55.444 } 00:23:55.444 Got JSON-RPC error response 00:23:55.444 GoRPCClient: error on JSON-RPC call 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:55.444 13:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:56.820 13:17:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJjZTZkNThjMDVlNThiOGQ3NmFjNjI1NzgzMDFjNDNlNmFiNzc1YjZjZGUxMTI1IZqvww==: 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: ]] 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDA2NzA0NjA0NjdhNTRhY2QyZjA2ZGU5MGQ3MDkwYTY4ZDkzNWMxMmYwOTA1YTZm+hEcVw==: 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.820 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.821 nvme0n1 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NmYjgzNTBkNzg4OWYzNDg3YzllNzU1OGNiOGRkZjLNnUKn: 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: ]] 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VkYzg2NTVmZmU3MjAzYzEyNzU4MDJhOWRiN2ZiOGYv41Oq: 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.821 2024/12/06 13:17:45 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:23:56.821 request: 00:23:56.821 { 00:23:56.821 "method": "bdev_nvme_set_keys", 00:23:56.821 "params": { 00:23:56.821 "name": "nvme0", 00:23:56.821 "dhchap_key": "key2", 00:23:56.821 "dhchap_ctrlr_key": "ckey1" 00:23:56.821 } 00:23:56.821 } 00:23:56.821 Got JSON-RPC error response 00:23:56.821 GoRPCClient: error on JSON-RPC call 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.821 13:17:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:56.821 13:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.754 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:58.012 rmmod nvme_tcp 00:23:58.012 rmmod nvme_fabrics 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 92833 ']' 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 92833 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 92833 ']' 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 92833 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92833 00:23:58.012 killing process with pid 92833 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 
= sudo ']' 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92833' 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 92833 00:23:58.012 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 92833 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.270 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:23:58.271 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:58.271 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:58.271 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:58.271 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:58.271 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:23:58.529 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.529 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:58.529 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:58.529 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.529 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:58.529 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:58.529 13:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:59.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:59.352 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:59.352 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:59.352 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ug4 /tmp/spdk.key-null.XRC /tmp/spdk.key-sha256.dPf /tmp/spdk.key-sha384.lmQ /tmp/spdk.key-sha512.zNk /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:23:59.352 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:59.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:59.608 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:59.608 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:59.608 00:23:59.608 real 0m38.081s 00:23:59.608 user 0m34.124s 00:23:59.608 sys 0m3.859s 00:23:59.608 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.608 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.608 ************************************ 00:23:59.608 END TEST nvmf_auth_host 00:23:59.608 ************************************ 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.865 ************************************ 00:23:59.865 START TEST nvmf_digest 00:23:59.865 
************************************ 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:59.865 * Looking for test storage... 00:23:59.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.865 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:59.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.865 --rc genhtml_branch_coverage=1 00:23:59.866 --rc genhtml_function_coverage=1 00:23:59.866 --rc genhtml_legend=1 00:23:59.866 --rc geninfo_all_blocks=1 00:23:59.866 --rc geninfo_unexecuted_blocks=1 00:23:59.866 00:23:59.866 ' 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:59.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.866 --rc genhtml_branch_coverage=1 00:23:59.866 --rc genhtml_function_coverage=1 00:23:59.866 --rc genhtml_legend=1 00:23:59.866 --rc geninfo_all_blocks=1 00:23:59.866 --rc geninfo_unexecuted_blocks=1 00:23:59.866 00:23:59.866 ' 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:59.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.866 --rc genhtml_branch_coverage=1 00:23:59.866 --rc genhtml_function_coverage=1 00:23:59.866 --rc genhtml_legend=1 00:23:59.866 --rc geninfo_all_blocks=1 00:23:59.866 --rc geninfo_unexecuted_blocks=1 00:23:59.866 00:23:59.866 ' 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:59.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.866 --rc genhtml_branch_coverage=1 00:23:59.866 --rc genhtml_function_coverage=1 00:23:59.866 --rc genhtml_legend=1 00:23:59.866 --rc geninfo_all_blocks=1 00:23:59.866 --rc geninfo_unexecuted_blocks=1 00:23:59.866 00:23:59.866 ' 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.866 13:17:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:59.866 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:59.866 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:00.124 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:00.125 Cannot find device "nvmf_init_br" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:00.125 Cannot find device "nvmf_init_br2" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:00.125 Cannot find device "nvmf_tgt_br" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:24:00.125 Cannot find device "nvmf_tgt_br2" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:00.125 Cannot find device "nvmf_init_br" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:00.125 Cannot find device "nvmf_init_br2" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:00.125 Cannot find device "nvmf_tgt_br" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:00.125 Cannot find device "nvmf_tgt_br2" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:00.125 Cannot find device "nvmf_br" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:00.125 Cannot find device "nvmf_init_if" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:00.125 Cannot find device "nvmf_init_if2" 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:00.125 13:17:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:00.125 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:00.384 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:00.384 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:24:00.384 00:24:00.384 --- 10.0.0.3 ping statistics --- 00:24:00.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.384 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:00.384 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:00.384 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:24:00.384 00:24:00.384 --- 10.0.0.4 ping statistics --- 00:24:00.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.384 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:00.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:24:00.384 00:24:00.384 --- 10.0.0.1 ping statistics --- 00:24:00.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.384 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:00.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:24:00.384 00:24:00.384 --- 10.0.0.2 ping statistics --- 00:24:00.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.384 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:00.384 ************************************ 00:24:00.384 START TEST nvmf_digest_clean 00:24:00.384 ************************************ 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
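The nvmf_veth_init trace above assembles a self-contained test network: veth pairs for the initiator (nvmf_init_if, nvmf_init_if2) and the target (nvmf_tgt_if, nvmf_tgt_if2), with the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, iptables rules admitting port 4420, and a ping of each address as a sanity check. A condensed sketch of the same topology, showing one pair only and reusing the interface names and 10.0.0.0/24 addresses from the trace (the second pair and error handling are omitted):

# Namespace for the target side, so initiator and target use separate stacks.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends plug into a bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing as logged: initiator 10.0.0.1, target 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring everything up and join the host-side ends on one bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP traffic on 4420; the comment tags the rule so the
# cleanup path's "iptables-save | grep -v SPDK_NVMF" can drop it later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# Sanity check, as in the ping statistics above.
ping -c 1 10.0.0.3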
00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.384 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.385 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=94510 00:24:00.385 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:00.385 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 94510 00:24:00.385 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94510 ']' 00:24:00.385 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.385 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.385 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.385 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.385 13:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.385 [2024-12-06 13:17:48.967713] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:00.385 [2024-12-06 13:17:48.967820] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.643 [2024-12-06 13:17:49.121777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.643 [2024-12-06 13:17:49.189991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.643 [2024-12-06 13:17:49.190053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.643 [2024-12-06 13:17:49.190068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.643 [2024-12-06 13:17:49.190078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.643 [2024-12-06 13:17:49.190087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
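nvmfappstart records the target's pid (nvmfpid=94510 here) and waitforlisten blocks until the app's RPC socket answers, which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." lines reflect. A minimal sketch of that handshake, assuming the binary and socket paths from the trace and using rpc_get_methods as the liveness probe (the real helper's probe may differ):

# Launch the target inside the namespace; --wait-for-rpc holds off
# subsystem initialization until an explicit RPC arrives.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the app responds.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done

# An app started with --wait-for-rpc sits in pre-init until this call.
"$rpc" -s /var/tmp/spdk.sock framework_start_init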
00:24:00.643 [2024-12-06 13:17:49.190567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.643 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.901 null0 00:24:00.901 [2024-12-06 13:17:49.387386] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.901 [2024-12-06 13:17:49.411565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94547 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94547 /var/tmp/bperf.sock 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94547 ']' 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:00.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.901 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.901 [2024-12-06 13:17:49.480906] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:00.901 [2024-12-06 13:17:49.481002] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94547 ] 00:24:01.158 [2024-12-06 13:17:49.633428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.158 [2024-12-06 13:17:49.702999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.158 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.158 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:01.158 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:01.158 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:01.158 13:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:01.722 13:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.722 13:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.980 nvme0n1 00:24:01.980 13:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:01.980 13:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.238 Running I/O for 2 seconds... 
00:24:04.107 18310.00 IOPS, 71.52 MiB/s [2024-12-06T13:17:52.747Z] 18139.00 IOPS, 70.86 MiB/s 00:24:04.107 Latency(us) 00:24:04.107 [2024-12-06T13:17:52.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.107 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:04.107 nvme0n1 : 2.00 18162.72 70.95 0.00 0.00 7039.63 3351.27 12034.79 00:24:04.107 [2024-12-06T13:17:52.747Z] =================================================================================================================== 00:24:04.107 [2024-12-06T13:17:52.747Z] Total : 18162.72 70.95 0.00 0.00 7039.63 3351.27 12034.79 00:24:04.107 { 00:24:04.107 "results": [ 00:24:04.107 { 00:24:04.107 "job": "nvme0n1", 00:24:04.107 "core_mask": "0x2", 00:24:04.107 "workload": "randread", 00:24:04.107 "status": "finished", 00:24:04.107 "queue_depth": 128, 00:24:04.107 "io_size": 4096, 00:24:04.107 "runtime": 2.004436, 00:24:04.107 "iops": 18162.71509791283, 00:24:04.107 "mibps": 70.948105851222, 00:24:04.107 "io_failed": 0, 00:24:04.107 "io_timeout": 0, 00:24:04.107 "avg_latency_us": 7039.625882247182, 00:24:04.107 "min_latency_us": 3351.2727272727275, 00:24:04.107 "max_latency_us": 12034.792727272727 00:24:04.107 } 00:24:04.107 ], 00:24:04.107 "core_count": 1 00:24:04.107 } 00:24:04.107 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:04.107 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:04.107 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:04.107 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:04.107 | select(.opcode=="crc32c") 00:24:04.107 | "\(.module_name) \(.executed)"' 00:24:04.107 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:04.365 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:04.365 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:04.365 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:04.365 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:04.365 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94547 00:24:04.365 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94547 ']' 00:24:04.365 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94547 00:24:04.365 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:04.365 13:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.365 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94547 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94547' 00:24:04.623 killing process with pid 94547 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94547 00:24:04.623 Received shutdown signal, test time was about 2.000000 seconds 00:24:04.623 00:24:04.623 Latency(us) 00:24:04.623 [2024-12-06T13:17:53.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.623 [2024-12-06T13:17:53.263Z] =================================================================================================================== 00:24:04.623 [2024-12-06T13:17:53.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94547 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94624 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94624 /var/tmp/bperf.sock 00:24:04.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94624 ']' 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.623 13:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:04.881 [2024-12-06 13:17:53.287524] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:04.881 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:04.881 Zero copy mechanism will not be used. 
00:24:04.881 [2024-12-06 13:17:53.287815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94624 ] 00:24:04.881 [2024-12-06 13:17:53.430800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.881 [2024-12-06 13:17:53.492927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.895 13:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.895 13:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:05.895 13:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:05.895 13:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:05.895 13:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:06.153 13:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.153 13:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.720 nvme0n1 00:24:06.720 13:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:06.720 13:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:06.720 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:06.720 Zero copy mechanism will not be used. 00:24:06.720 Running I/O for 2 seconds... 
00:24:09.031 8034.00 IOPS, 1004.25 MiB/s [2024-12-06T13:17:57.671Z] 8013.50 IOPS, 1001.69 MiB/s 00:24:09.031 Latency(us) 00:24:09.031 [2024-12-06T13:17:57.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.031 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:09.031 nvme0n1 : 2.00 8008.72 1001.09 0.00 0.00 1994.05 618.12 7387.69 00:24:09.031 [2024-12-06T13:17:57.671Z] =================================================================================================================== 00:24:09.031 [2024-12-06T13:17:57.671Z] Total : 8008.72 1001.09 0.00 0.00 1994.05 618.12 7387.69 00:24:09.031 { 00:24:09.031 "results": [ 00:24:09.031 { 00:24:09.031 "job": "nvme0n1", 00:24:09.031 "core_mask": "0x2", 00:24:09.031 "workload": "randread", 00:24:09.031 "status": "finished", 00:24:09.031 "queue_depth": 16, 00:24:09.031 "io_size": 131072, 00:24:09.031 "runtime": 2.003192, 00:24:09.031 "iops": 8008.718085934848, 00:24:09.031 "mibps": 1001.089760741856, 00:24:09.031 "io_failed": 0, 00:24:09.031 "io_timeout": 0, 00:24:09.031 "avg_latency_us": 1994.048819252804, 00:24:09.031 "min_latency_us": 618.1236363636364, 00:24:09.031 "max_latency_us": 7387.694545454546 00:24:09.031 } 00:24:09.031 ], 00:24:09.031 "core_count": 1 00:24:09.031 } 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:09.031 | select(.opcode=="crc32c") 00:24:09.031 | "\(.module_name) \(.executed)"' 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94624 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94624 ']' 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94624 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94624 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
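Each pass ends with the same verification: get_accel_stats fetches the accel-framework counters over the bperf RPC socket, and the jq filter shown in the trace reduces them to the crc32c module name and execution count, which must be non-zero and come from the expected engine (software here, since no DSA device is configured). The throughput also squares with the I/O size: at 128 KiB per I/O, 8008.72 IOPS works out to 8008.72 / 8 = 1001.09 MiB/s, the value in the MiB/s column. A sketch of the stats check, with the filter copied verbatim from the trace:

# Pull accel statistics from bdevperf and keep only the crc32c opcode,
# which the --ddgst data digest drives through the accel framework.
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"')

# Digests were active, so crc32c must actually have executed...
(( acc_executed > 0 ))
# ...and on the engine the test expects (software, as no DSA was set up).
[[ $acc_module == software ]]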
00:24:09.031 killing process with pid 94624 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94624' 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94624 00:24:09.031 Received shutdown signal, test time was about 2.000000 seconds 00:24:09.031 00:24:09.031 Latency(us) 00:24:09.031 [2024-12-06T13:17:57.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.031 [2024-12-06T13:17:57.671Z] =================================================================================================================== 00:24:09.031 [2024-12-06T13:17:57.671Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:09.031 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94624 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94714 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94714 /var/tmp/bperf.sock 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94714 ']' 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.289 13:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:09.289 [2024-12-06 13:17:57.886127] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:24:09.289 [2024-12-06 13:17:57.886232] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94714 ] 00:24:09.547 [2024-12-06 13:17:58.034848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.547 [2024-12-06 13:17:58.100250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.483 13:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.483 13:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:10.483 13:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:10.483 13:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:10.483 13:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:10.741 13:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.741 13:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.306 nvme0n1 00:24:11.306 13:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:11.306 13:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:11.306 Running I/O for 2 seconds... 
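Stripped of the xtrace noise, the run_bperf sequence traced above comes down to four commands. Paths and flags are exactly as they appear in the log; launching bdevperf in the background with & is an assumption about how the harness runs it:

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  # start bdevperf idle (-z, --wait-for-rpc) on core 1 (-m 2 is the core mask, hence "Core Mask 0x2" in the result tables)
  $spdk/build/examples/bdevperf -m 2 -r $sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # finish framework init, then attach the target with TCP data digest (--ddgst) enabled
  $spdk/scripts/rpc.py -s $sock framework_start_init
  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # drive the timed 2-second workload against the new nvme0n1 bdev
  $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests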
00:24:13.614 21926.00 IOPS, 85.65 MiB/s [2024-12-06T13:18:02.254Z] 22077.00 IOPS, 86.24 MiB/s 00:24:13.614 Latency(us) 00:24:13.614 [2024-12-06T13:18:02.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.614 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:13.614 nvme0n1 : 2.00 22103.34 86.34 0.00 0.00 5784.97 2546.97 15966.95 00:24:13.614 [2024-12-06T13:18:02.254Z] =================================================================================================================== 00:24:13.614 [2024-12-06T13:18:02.254Z] Total : 22103.34 86.34 0.00 0.00 5784.97 2546.97 15966.95 00:24:13.614 { 00:24:13.614 "results": [ 00:24:13.614 { 00:24:13.614 "job": "nvme0n1", 00:24:13.614 "core_mask": "0x2", 00:24:13.614 "workload": "randwrite", 00:24:13.614 "status": "finished", 00:24:13.614 "queue_depth": 128, 00:24:13.614 "io_size": 4096, 00:24:13.614 "runtime": 2.003408, 00:24:13.614 "iops": 22103.33591559982, 00:24:13.614 "mibps": 86.34115592031179, 00:24:13.614 "io_failed": 0, 00:24:13.614 "io_timeout": 0, 00:24:13.614 "avg_latency_us": 5784.965168855804, 00:24:13.614 "min_latency_us": 2546.9672727272728, 00:24:13.614 "max_latency_us": 15966.952727272726 00:24:13.614 } 00:24:13.614 ], 00:24:13.614 "core_count": 1 00:24:13.614 } 00:24:13.614 13:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:13.614 13:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:13.614 13:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:13.614 13:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:13.614 13:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:13.614 | select(.opcode=="crc32c") 00:24:13.615 | "\(.module_name) \(.executed)"' 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94714 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94714 ']' 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94714 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94714 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
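The crc32c verification that follows each run (digest.sh lines 93 to 96 in the trace) reads the accel framework's stats back over the same socket and confirms the digests were actually computed. A minimal sketch of that check, assuming the same bperf socket:

  # pull accel stats from bdevperf and extract the crc32c module name and op count
  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # with scan_dsa=false the expected module is "software", and at least one op must have run
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c: $acc_executed ops in $acc_module"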
00:24:13.615 killing process with pid 94714 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94714' 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94714 00:24:13.615 Received shutdown signal, test time was about 2.000000 seconds 00:24:13.615 00:24:13.615 Latency(us) 00:24:13.615 [2024-12-06T13:18:02.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.615 [2024-12-06T13:18:02.255Z] =================================================================================================================== 00:24:13.615 [2024-12-06T13:18:02.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.615 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94714 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94811 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94811 /var/tmp/bperf.sock 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94811 ']' 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.874 13:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:13.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:13.874 Zero copy mechanism will not be used. 00:24:13.874 [2024-12-06 13:18:02.438815] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:24:13.874 [2024-12-06 13:18:02.438913] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94811 ] 00:24:14.220 [2024-12-06 13:18:02.580794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.220 [2024-12-06 13:18:02.652692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.156 13:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.156 13:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:15.156 13:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:15.156 13:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:15.156 13:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:15.414 13:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:15.414 13:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:15.674 nvme0n1 00:24:15.674 13:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:15.674 13:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:15.932 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:15.932 Zero copy mechanism will not be used. 00:24:15.932 Running I/O for 2 seconds... 
00:24:17.802 6352.00 IOPS, 794.00 MiB/s [2024-12-06T13:18:06.442Z] 6378.00 IOPS, 797.25 MiB/s 00:24:17.802 Latency(us) 00:24:17.802 [2024-12-06T13:18:06.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.802 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:17.802 nvme0n1 : 2.00 6376.73 797.09 0.00 0.00 2503.43 1534.14 11379.43 00:24:17.802 [2024-12-06T13:18:06.442Z] =================================================================================================================== 00:24:17.802 [2024-12-06T13:18:06.442Z] Total : 6376.73 797.09 0.00 0.00 2503.43 1534.14 11379.43 00:24:17.802 { 00:24:17.802 "results": [ 00:24:17.802 { 00:24:17.802 "job": "nvme0n1", 00:24:17.802 "core_mask": "0x2", 00:24:17.802 "workload": "randwrite", 00:24:17.802 "status": "finished", 00:24:17.802 "queue_depth": 16, 00:24:17.802 "io_size": 131072, 00:24:17.802 "runtime": 2.004006, 00:24:17.802 "iops": 6376.727414987779, 00:24:17.802 "mibps": 797.0909268734724, 00:24:17.802 "io_failed": 0, 00:24:17.802 "io_timeout": 0, 00:24:17.802 "avg_latency_us": 2503.430536177962, 00:24:17.802 "min_latency_us": 1534.138181818182, 00:24:17.802 "max_latency_us": 11379.432727272728 00:24:17.802 } 00:24:17.802 ], 00:24:17.803 "core_count": 1 00:24:17.803 } 00:24:17.803 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:17.803 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:17.803 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:17.803 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:17.803 | select(.opcode=="crc32c") 00:24:17.803 | "\(.module_name) \(.executed)"' 00:24:17.803 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:18.060 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:18.060 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:18.060 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94811 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94811 ']' 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94811 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94811 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:18.061 killing process with pid 94811 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94811' 00:24:18.061 Received shutdown signal, test time was about 2.000000 seconds 00:24:18.061 00:24:18.061 Latency(us) 00:24:18.061 [2024-12-06T13:18:06.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.061 [2024-12-06T13:18:06.701Z] =================================================================================================================== 00:24:18.061 [2024-12-06T13:18:06.701Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94811 00:24:18.061 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94811 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94510 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94510 ']' 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94510 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94510 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:18.318 killing process with pid 94510 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94510' 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94510 00:24:18.318 13:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94510 00:24:18.576 00:24:18.576 real 0m18.227s 00:24:18.576 user 0m36.399s 00:24:18.576 sys 0m4.449s 00:24:18.576 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.576 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:18.576 ************************************ 00:24:18.576 END TEST nvmf_digest_clean 00:24:18.576 ************************************ 00:24:18.576 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:18.576 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:18.576 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:18.577 ************************************ 00:24:18.577 START TEST nvmf_digest_error 00:24:18.577 ************************************ 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:24:18.577 
13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=94925 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 94925 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94925 ']' 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.577 13:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.835 [2024-12-06 13:18:07.244665] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:18.835 [2024-12-06 13:18:07.244785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.835 [2024-12-06 13:18:07.394181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.835 [2024-12-06 13:18:07.457286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.835 [2024-12-06 13:18:07.457347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.835 [2024-12-06 13:18:07.457359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.835 [2024-12-06 13:18:07.457368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.835 [2024-12-06 13:18:07.457375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
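The target is started with -e 0xFFFF, so every tracepoint group is enabled, and the NOTICE lines above spell out how to retrieve the trace. The snapshot command is quoted verbatim from the log; copying the shared-memory file is the log's suggested alternative:

  # live snapshot of the nvmf target's tracepoints (instance id 0 matches the -i 0 launch flag)
  spdk_trace -s nvmf -i 0
  # or keep the raw trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0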
00:24:18.835 [2024-12-06 13:18:07.457813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:19.766 [2024-12-06 13:18:08.322454] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.766 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:20.024 null0 00:24:20.024 [2024-12-06 13:18:08.440662] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.024 [2024-12-06 13:18:08.464814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94975 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94975 /var/tmp/bperf.sock 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94975 ']' 00:24:20.024 13:18:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:20.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.024 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:20.024 [2024-12-06 13:18:08.531045] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:20.024 [2024-12-06 13:18:08.531174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94975 ] 00:24:20.282 [2024-12-06 13:18:08.678593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.282 [2024-12-06 13:18:08.738084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.282 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.282 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:20.282 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:20.282 13:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:20.540 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:20.540 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.540 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:20.797 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.797 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:20.797 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:21.055 nvme0n1 00:24:21.055 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:21.055 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.055 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:21.055 13:18:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.055 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:21.055 13:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:21.313 Running I/O for 2 seconds... 00:24:21.313 [2024-12-06 13:18:09.781606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.313 [2024-12-06 13:18:09.781681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.781697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.314 [2024-12-06 13:18:09.794952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.795007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.795023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.314 [2024-12-06 13:18:09.807721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.807776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.807792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.314 [2024-12-06 13:18:09.822331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.822388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.822420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.314 [2024-12-06 13:18:09.837231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.837498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.837532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.314 [2024-12-06 13:18:09.852790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.853044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.853064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
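The "data digest error" lines above are this test behaving as designed, not a malfunction: crc32c on the target is routed to the error accel module and told to corrupt 256 digests, so the host, attached with --ddgst, detects each bad digest and reports COMMAND TRANSIENT TRANSPORT ERROR (00/22). A condensed sketch of the setup from the trace; the assumption that plain rpc_cmd calls land on the target's default /var/tmp/spdk.sock (the trace never prints that socket) is mine:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # host side (bperf socket): count NVMe errors, never retry, attach with data digest on
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: crc32c was assigned to the error module earlier (digest.sh:104); start with injection disabled
  $rpc accel_error_inject_error -o crc32c -t disable
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # once I/O is running, corrupt the next 256 crc32c results on the target
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256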
00:24:21.314 [2024-12-06 13:18:09.867482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.867535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.867550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.314 [2024-12-06 13:18:09.881590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.881642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.881658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.314 [2024-12-06 13:18:09.895985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.896050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.896069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.314 [2024-12-06 13:18:09.909682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.909747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.909763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.314 [2024-12-06 13:18:09.925053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.925151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.925182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.314 [2024-12-06 13:18:09.938154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.314 [2024-12-06 13:18:09.938211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.314 [2024-12-06 13:18:09.938226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.572 [2024-12-06 13:18:09.953739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.572 [2024-12-06 13:18:09.953808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.572 [2024-12-06 13:18:09.953824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.572 [2024-12-06 13:18:09.968003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.572 [2024-12-06 13:18:09.968052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.572 [2024-12-06 13:18:09.968078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.572 [2024-12-06 13:18:09.982382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.572 [2024-12-06 13:18:09.982442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.572 [2024-12-06 13:18:09.982496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.572 [2024-12-06 13:18:09.997503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.572 [2024-12-06 13:18:09.997559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.572 [2024-12-06 13:18:09.997589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.572 [2024-12-06 13:18:10.012447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.572 [2024-12-06 13:18:10.012533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.572 [2024-12-06 13:18:10.012550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.572 [2024-12-06 13:18:10.026575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.572 [2024-12-06 13:18:10.026835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.572 [2024-12-06 13:18:10.026855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.572 [2024-12-06 13:18:10.041089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.572 [2024-12-06 13:18:10.041167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.572 [2024-12-06 13:18:10.041200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.055460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.055543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 13:18:10.055570] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.069861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.070069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 13:18:10.070089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.084528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.084591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 13:18:10.084610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.099088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.099148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 13:18:10.099163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.114344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.114398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 13:18:10.114428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.128141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.128362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 13:18:10.128383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.144550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.144609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 13:18:10.144625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.158741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.158939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 
13:18:10.158959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.173354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.173406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 13:18:10.173437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.185734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.185784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 13:18:10.185811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-12-06 13:18:10.201391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.573 [2024-12-06 13:18:10.201440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-12-06 13:18:10.201501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.216454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.216697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.216719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.229863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.229922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.229942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.245616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.245668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.245683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.261198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.261255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4187 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.261300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.274696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.274755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.274770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.290992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.291053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.291069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.303281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.303324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.303353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.318206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.318257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.318283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.332590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.332636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.332666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.347197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.347260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.347279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.361928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.361996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:11435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.362021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.376926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.377157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.377192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.389783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.389857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.832 [2024-12-06 13:18:10.389877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.832 [2024-12-06 13:18:10.404171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.832 [2024-12-06 13:18:10.404402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.833 [2024-12-06 13:18:10.404422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.833 [2024-12-06 13:18:10.419072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.833 [2024-12-06 13:18:10.419347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.833 [2024-12-06 13:18:10.419385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.833 [2024-12-06 13:18:10.435164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.833 [2024-12-06 13:18:10.435361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.833 [2024-12-06 13:18:10.435381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.833 [2024-12-06 13:18:10.450523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.833 [2024-12-06 13:18:10.450607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.833 [2024-12-06 13:18:10.450638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.833 [2024-12-06 13:18:10.465033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:21.833 [2024-12-06 13:18:10.465085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.833 [2024-12-06 13:18:10.465100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.479755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.479815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.479829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.494521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.494582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.494597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.509562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.509751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.509771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.524331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.524383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.524415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.538916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.538982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.538999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.553118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.553316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.553336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.567275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.567335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.567352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.579831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.580010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.580030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.594889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.595004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.595022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.611896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.611960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.611976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.625998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.626197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.090 [2024-12-06 13:18:10.626217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.090 [2024-12-06 13:18:10.637657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.090 [2024-12-06 13:18:10.637706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.091 [2024-12-06 13:18:10.637721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.091 [2024-12-06 13:18:10.652895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.091 [2024-12-06 13:18:10.653076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.091 [2024-12-06 13:18:10.653096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.091 [2024-12-06 13:18:10.668192] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.091 [2024-12-06 13:18:10.668391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.091 [2024-12-06 13:18:10.668428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.091 [2024-12-06 13:18:10.683756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.091 [2024-12-06 13:18:10.684008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.091 [2024-12-06 13:18:10.684198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.091 [2024-12-06 13:18:10.699701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.091 [2024-12-06 13:18:10.699906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.091 [2024-12-06 13:18:10.700037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.091 [2024-12-06 13:18:10.714916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.091 [2024-12-06 13:18:10.715181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.091 [2024-12-06 13:18:10.715321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.350 [2024-12-06 13:18:10.730273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.350 [2024-12-06 13:18:10.730552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.350 [2024-12-06 13:18:10.730832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.350 [2024-12-06 13:18:10.745808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.350 [2024-12-06 13:18:10.746013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.350 [2024-12-06 13:18:10.746151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.350 17270.00 IOPS, 67.46 MiB/s [2024-12-06T13:18:10.990Z] [2024-12-06 13:18:10.761373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.350 [2024-12-06 13:18:10.761588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.350 [2024-12-06 13:18:10.761730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.350 [2024-12-06 13:18:10.774361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.350 [2024-12-06 13:18:10.774613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.350 [2024-12-06 13:18:10.774752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.350 [2024-12-06 13:18:10.790012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.350 [2024-12-06 13:18:10.790213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.350 [2024-12-06 13:18:10.790342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.350 [2024-12-06 13:18:10.804821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.350 [2024-12-06 13:18:10.805079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.350 [2024-12-06 13:18:10.805252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.350 [2024-12-06 13:18:10.820339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.350 [2024-12-06 13:18:10.820386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.350 [2024-12-06 13:18:10.820417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.350 [2024-12-06 13:18:10.834576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.834630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.351 [2024-12-06 13:18:10.834645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.351 [2024-12-06 13:18:10.849691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.849922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.351 [2024-12-06 13:18:10.849952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.351 [2024-12-06 13:18:10.864730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.864978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.351 [2024-12-06 13:18:10.864998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.351 [2024-12-06 13:18:10.877492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.877539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.351 [2024-12-06 13:18:10.877554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.351 [2024-12-06 13:18:10.892906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.892959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.351 [2024-12-06 13:18:10.892974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.351 [2024-12-06 13:18:10.907109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.907169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.351 [2024-12-06 13:18:10.907199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.351 [2024-12-06 13:18:10.921473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.921524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.351 [2024-12-06 13:18:10.921539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.351 [2024-12-06 13:18:10.935546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.935601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.351 [2024-12-06 13:18:10.935617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.351 [2024-12-06 13:18:10.950800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.951051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.351 [2024-12-06 13:18:10.951086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.351 [2024-12-06 13:18:10.965157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.965206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:22.351 [2024-12-06 13:18:10.965221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.351 [2024-12-06 13:18:10.980355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.351 [2024-12-06 13:18:10.980546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.351 [2024-12-06 13:18:10.980567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:10.994767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:10.994836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:10.994856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.010231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.010285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.010302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.024437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.024529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.024545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.038300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.038350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.038365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.052493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.052540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.052555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.067160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.067363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:19355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.067383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.081768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.081828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.081843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.096521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.096577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.096592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.110210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.110257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.110272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.123250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.123423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.123442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.138295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.138342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.138358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.152528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.152594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.152613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.167160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.167341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.167361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.181461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.181509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.181524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.197705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.197768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.197784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.214422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.214500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.214532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.610 [2024-12-06 13:18:11.231729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.610 [2024-12-06 13:18:11.231788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.610 [2024-12-06 13:18:11.231808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.246899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.247079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.247099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.264394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.264462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.264479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.280771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 
00:24:22.870 [2024-12-06 13:18:11.280852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.280867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.298246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.298441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.298475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.312523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.312581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.312612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.325409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.325494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.325510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.340297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.340348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.340364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.353363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.353414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.353429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.369174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.369254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.369284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.383562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.383608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.383623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.397511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.397571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.397587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.412470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.412539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.870 [2024-12-06 13:18:11.412554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.870 [2024-12-06 13:18:11.426958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.870 [2024-12-06 13:18:11.427018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.871 [2024-12-06 13:18:11.427033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.871 [2024-12-06 13:18:11.440944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.871 [2024-12-06 13:18:11.441147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.871 [2024-12-06 13:18:11.441178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.871 [2024-12-06 13:18:11.455695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.871 [2024-12-06 13:18:11.455918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.871 [2024-12-06 13:18:11.455942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.871 [2024-12-06 13:18:11.470257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.871 [2024-12-06 13:18:11.470309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.871 [2024-12-06 13:18:11.470324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.871 [2024-12-06 13:18:11.483801] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.871 [2024-12-06 13:18:11.483854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.871 [2024-12-06 13:18:11.483869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.871 [2024-12-06 13:18:11.496276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:22.871 [2024-12-06 13:18:11.496320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.871 [2024-12-06 13:18:11.496351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.131 [2024-12-06 13:18:11.510887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.131 [2024-12-06 13:18:11.510935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.131 [2024-12-06 13:18:11.510949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.131 [2024-12-06 13:18:11.524190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.131 [2024-12-06 13:18:11.524234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.131 [2024-12-06 13:18:11.524249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.131 [2024-12-06 13:18:11.539106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.131 [2024-12-06 13:18:11.539157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.131 [2024-12-06 13:18:11.539173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.131 [2024-12-06 13:18:11.553095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.131 [2024-12-06 13:18:11.553170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.131 [2024-12-06 13:18:11.553201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.131 [2024-12-06 13:18:11.567286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.131 [2024-12-06 13:18:11.567335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.131 [2024-12-06 13:18:11.567350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:23.131 [2024-12-06 13:18:11.581210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.132 [2024-12-06 13:18:11.581267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.132 [2024-12-06 13:18:11.581298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.132 [2024-12-06 13:18:11.595676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.132 [2024-12-06 13:18:11.595737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.132 [2024-12-06 13:18:11.595756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.132 [2024-12-06 13:18:11.610611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.132 [2024-12-06 13:18:11.610655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.132 [2024-12-06 13:18:11.610669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.132 [2024-12-06 13:18:11.625019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.132 [2024-12-06 13:18:11.625069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.132 [2024-12-06 13:18:11.625085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.132 [2024-12-06 13:18:11.637229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.132 [2024-12-06 13:18:11.637279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.132 [2024-12-06 13:18:11.637294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.132 [2024-12-06 13:18:11.652225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.132 [2024-12-06 13:18:11.652291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.132 [2024-12-06 13:18:11.652318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.132 [2024-12-06 13:18:11.667598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0) 00:24:23.132 [2024-12-06 13:18:11.667652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.132 [2024-12-06 13:18:11.667667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:23.132 [2024-12-06 13:18:11.682183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0)
00:24:23.132 [2024-12-06 13:18:11.682408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.132 [2024-12-06 13:18:11.682443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:23.132 [2024-12-06 13:18:11.696517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0)
00:24:23.132 [2024-12-06 13:18:11.696579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.132 [2024-12-06 13:18:11.696595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:23.132 [2024-12-06 13:18:11.712149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0)
00:24:23.132 [2024-12-06 13:18:11.712220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.132 [2024-12-06 13:18:11.712250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:23.132 [2024-12-06 13:18:11.726441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0)
00:24:23.132 [2024-12-06 13:18:11.726521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.132 [2024-12-06 13:18:11.726537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:23.132 [2024-12-06 13:18:11.738413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0)
00:24:23.132 [2024-12-06 13:18:11.738614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.132 [2024-12-06 13:18:11.738636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:23.132 [2024-12-06 13:18:11.754136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1efd2d0)
00:24:23.132 [2024-12-06 13:18:11.754310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.132 [2024-12-06 13:18:11.754330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:23.132 17344.00 IOPS, 67.75 MiB/s
00:24:23.132 Latency(us)
00:24:23.132 [2024-12-06T13:18:11.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:23.132 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:23.132 nvme0n1 : 2.01 17372.58 67.86 0.00 0.00 7358.64 4021.53 20971.52
00:24:23.132 [2024-12-06T13:18:11.772Z] ===================================================================================================================
00:24:23.132 [2024-12-06T13:18:11.772Z] Total : 17372.58 67.86 0.00 0.00 7358.64 4021.53 20971.52
00:24:23.132 {
00:24:23.132   "results": [
00:24:23.132     {
00:24:23.132       "job": "nvme0n1",
00:24:23.132       "core_mask": "0x2",
00:24:23.132       "workload": "randread",
00:24:23.132       "status": "finished",
00:24:23.132       "queue_depth": 128,
00:24:23.132       "io_size": 4096,
00:24:23.132       "runtime": 2.006035,
00:24:23.132       "iops": 17372.578245145272,
00:24:23.132       "mibps": 67.86163377009872,
00:24:23.132       "io_failed": 0,
00:24:23.132       "io_timeout": 0,
00:24:23.132       "avg_latency_us": 7358.638855980175,
00:24:23.132       "min_latency_us": 4021.5272727272727,
00:24:23.132       "max_latency_us": 20971.52
00:24:23.132     }
00:24:23.132   ],
00:24:23.132   "core_count": 1
00:24:23.132 }
00:24:23.391 13:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:23.391 13:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:23.391 | .driver_specific
00:24:23.391 | .nvme_error
00:24:23.391 | .status_code
00:24:23.391 | .command_transient_transport_error'
00:24:23.391 13:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:23.391 13:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 ))
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94975
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94975 ']'
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94975
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94975
00:24:23.650 killing process with pid 94975
00:24:23.650 Received shutdown signal, test time was about 2.000000 seconds
00:24:23.650
00:24:23.650 Latency(us)
00:24:23.650 [2024-12-06T13:18:12.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:23.650 [2024-12-06T13:18:12.290Z] ===================================================================================================================
00:24:23.650 [2024-12-06T13:18:12.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94975'
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94975
00:24:23.650 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94975
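For reference, the check "(( 136 > 0 ))" above is the expanded form of an assertion on get_transient_errcount, whose body appears in this log only as xtrace fragments. A minimal bash sketch of what those fragments imply, assuming the same bperf.sock socket; the socket path, RPC name, and jq filter are verbatim from the trace, while the function wiring is reconstructed rather than quoted from host/digest.sh:

# Sketch reconstructed from the xtrace above, not captured log output.
get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat exposes per-bdev NVMe error counters (enabled later
    # via bdev_nvme_set_options --nvme-error-stat); pull out the transient
    # transport error count that injected digest corruption produces.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}
# The test passes when at least one transient transport error was counted,
# e.g. the 136 seen in this run:
(( $(get_transient_errcount nvme0n1) > 0 ))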
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95046
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95046 /var/tmp/bperf.sock
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95046 ']'
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:23.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:23.909 13:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:23.909 [2024-12-06 13:18:12.418863] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization...
00:24:23.909 [2024-12-06 13:18:12.419226] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95046 ]
00:24:23.909 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:23.909 Zero copy mechanism will not be used.
00:24:24.167 [2024-12-06 13:18:12.565004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:24.167 [2024-12-06 13:18:12.631791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:25.105 13:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:25.105 13:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:24:25.105 13:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:25.105 13:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:25.365 13:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:25.365 13:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:25.365 13:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:25.365 13:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:25.365 13:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:25.365 13:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:25.623 nvme0n1
00:24:25.623 13:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:25.623 13:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:25.623 13:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:25.623 13:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:25.623 13:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:25.623 13:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:25.883 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:25.883 Zero copy mechanism will not be used.
00:24:25.883 Running I/O for 2 seconds...
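For reference, the xtrace above is the whole setup of this second error pass (131072-byte randread at queue depth 16). Replayed as plain commands it looks roughly like the sketch below; this is an illustration of the traced RPCs, not the digest.sh source, and it assumes that rpc_cmd (traced without an -s flag) talks to the target application's default RPC socket rather than to bperf.sock:

# Sketch of the traced RPC sequence, not captured log output.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock   # bdevperf's RPC socket, per the trace

# Count NVMe error completions instead of failing the bdev, and retry
# indefinitely so the injected digest errors stay transient:
$rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Error injection starts disabled (default socket assumed, see note above):
$rpc accel_error_inject_error -o crc32c -t disable

# Attach the controller with data digest enabled, so every data-carrying PDU
# carries a CRC32C that the host verifies on receive:
$rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 CRC32C computations; the host side then logs the
# "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR pairs that follow:
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive the 2-second workload inside the already-running bdevperf:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests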
00:24:25.883 [2024-12-06 13:18:14.299697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.299757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.299773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.303108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.303148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.303193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.306743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.306784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.306798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.311379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.311422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.311436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.316622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.316667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.316682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.320105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.320152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.320166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.324595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.324636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.324651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.329588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.329629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.329642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.334624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.334666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.334680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.337482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.337538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.337552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.342179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.342224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.342239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.346718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.346764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.346778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.350891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.350933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.350946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.355057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.355097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.355126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.359231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.359288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.359301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.363246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.363285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.363313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.366955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.367012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.367025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.371149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.371205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.371234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.374990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.375032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.375045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.380003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.380046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.380060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.383104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.383144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.383157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.387461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.387499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.387512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.392005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.392047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.883 [2024-12-06 13:18:14.392060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.883 [2024-12-06 13:18:14.396365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.883 [2024-12-06 13:18:14.396406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.396419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.400236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.400276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.400290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.404264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.404305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.404318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.408218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.408258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.408272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.411868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.411908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 
[2024-12-06 13:18:14.411922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.415763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.415802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.415816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.419247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.419296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.419310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.423852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.423892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.423905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.428445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.428497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.428511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.433466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.433506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.433518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.438297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.438350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.438363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.441746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.441784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17024 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.441805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.446065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.446105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.446118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.451076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.451118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.451131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.456238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.456280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.456293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.459850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.459891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.459904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.464270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.464310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.464324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.469383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.469423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.469436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.474128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.474168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.474182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.478663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.478703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.478716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.483206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.483247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.483260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.487702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.487742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.487756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.490599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.490637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.490651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.495269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.495309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.495322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.500182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.500225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.500238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.504902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.504943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.504957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.509703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.509745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.884 [2024-12-06 13:18:14.509759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:25.884 [2024-12-06 13:18:14.513221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.884 [2024-12-06 13:18:14.513261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.885 [2024-12-06 13:18:14.513275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:25.885 [2024-12-06 13:18:14.517215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:25.885 [2024-12-06 13:18:14.517256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.885 [2024-12-06 13:18:14.517269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.522172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.522215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.522229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.527109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.527151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.527164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.532399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.532441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.532469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.535579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 
[2024-12-06 13:18:14.535618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.535631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.539671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.539712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.539725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.544343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.544399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.544412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.547998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.548038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.548051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.552165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.552205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.552219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.556673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.556713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.556726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.560017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.560059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.560072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.564978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.565018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.565032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.569951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.569992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.570005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.573384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.573424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.573437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.577714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.577756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.145 [2024-12-06 13:18:14.577770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.145 [2024-12-06 13:18:14.582827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.145 [2024-12-06 13:18:14.582869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.582882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.586983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.587023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.587035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.591425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.591494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.591509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.594935] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.594976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.594989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.600025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.600066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.600080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.603221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.603273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.603286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.607754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.607796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.607810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.612742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.612786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.612800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.617691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.617735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.617749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.621263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.621304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.621317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:24:26.146 [2024-12-06 13:18:14.625432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.625486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.625501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.629958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.630014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.630028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.634661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.634703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.634716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.637745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.637784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.637806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.642012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.642053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.642066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.645905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.645945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.645958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.649000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.649039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.649052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.653605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.653645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.653658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.658758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.658798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.658811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.663409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.663462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.663477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.666863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.666902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.666915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.671098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.671138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.671151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.675552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.675590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.675603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.680406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.680446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.680472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.685497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.685536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.146 [2024-12-06 13:18:14.685550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.146 [2024-12-06 13:18:14.689028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.146 [2024-12-06 13:18:14.689066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.689079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.693129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.693170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.693184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.697971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.698010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.698023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.702804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.702845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.702858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.705509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.705546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.705559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.710389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.710428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:26.147 [2024-12-06 13:18:14.710442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.715292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.715332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.715345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.718762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.718800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.718812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.722870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.722909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.722922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.727847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.727887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.727900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.732152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.732190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.732203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.735298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.735350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.735363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.740234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.740274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.740288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.744397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.744480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.744494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.748689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.748727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.748740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.752062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.752099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.752128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.756418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.756497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.756511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.760857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.760896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.760909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.763920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.763990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.764004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.768723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.768761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.768774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.773441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.773501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.773532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.147 [2024-12-06 13:18:14.778624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.147 [2024-12-06 13:18:14.778662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.147 [2024-12-06 13:18:14.778675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.408 [2024-12-06 13:18:14.783890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.783929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.783942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.787640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.787676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.787689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.792095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.792134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.792147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.797045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.797085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.797099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.802068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 
00:24:26.409 [2024-12-06 13:18:14.802108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.802121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.806415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.806492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.806506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.809720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.809757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.809770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.814141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.814210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.814224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.818801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.818839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.818852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.822041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.822081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.822093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.826411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.826486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.826499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.831383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.831418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.831447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.835672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.835711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.835724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.838968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.839021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.839050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.843431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.843508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.843538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.847494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.847574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.847587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.851640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.851679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.851692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.856140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.856194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.856206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.859126] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.859164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.859177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.863304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.863343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.863356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.867424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.867498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.867512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.870802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.870839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.870852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.875132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.875184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.875196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.880176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.880229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.880257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.409 [2024-12-06 13:18:14.883088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.409 [2024-12-06 13:18:14.883139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.409 [2024-12-06 13:18:14.883167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:24:26.409 [2024-12-06 13:18:14.887604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.409 [2024-12-06 13:18:14.887642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.409 [2024-12-06 13:18:14.887655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.409 [2024-12-06 13:18:14.891594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.409 [2024-12-06 13:18:14.891631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.891644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.895837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.895877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.895890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.899402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.899463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.899478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.904608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.904645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.904658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.909642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.909680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.909694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.913113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.913165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.913193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.917587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.917626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.917639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.922618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.922657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.922670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.926859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.926899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.926912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.929680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.929717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.929730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.934642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.934681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.934694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.939450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.939520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.939534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.942605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.942643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.942655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.947792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.947832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.947845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.952462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.952527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.952541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.955678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.955716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.955728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.961014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.961054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.961067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.966367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.966421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.966435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.971007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.971052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.971064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.975722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.975762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.975775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.979010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.979049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.979062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.983882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.983920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.983934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.988391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.988442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.988481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.992985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.993040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.993052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:14.996160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:14.996212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:14.996241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:15.001032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:15.001085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:15.001113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:15.006086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:15.006126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.410 [2024-12-06 13:18:15.006139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.410 [2024-12-06 13:18:15.010402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.410 [2024-12-06 13:18:15.010441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.411 [2024-12-06 13:18:15.010468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.411 [2024-12-06 13:18:15.013912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.411 [2024-12-06 13:18:15.013952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.411 [2024-12-06 13:18:15.013965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.411 [2024-12-06 13:18:15.017811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.411 [2024-12-06 13:18:15.017856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.411 [2024-12-06 13:18:15.017869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.411 [2024-12-06 13:18:15.022319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.411 [2024-12-06 13:18:15.022374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.411 [2024-12-06 13:18:15.022388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.411 [2024-12-06 13:18:15.026787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.411 [2024-12-06 13:18:15.026829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.411 [2024-12-06 13:18:15.026842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.411 [2024-12-06 13:18:15.029774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.411 [2024-12-06 13:18:15.029821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.411 [2024-12-06 13:18:15.029835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.411 [2024-12-06 13:18:15.034864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.411 [2024-12-06 13:18:15.034903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.411 [2024-12-06 13:18:15.034917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.411 [2024-12-06 13:18:15.040054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.411 [2024-12-06 13:18:15.040094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.411 [2024-12-06 13:18:15.040108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.045171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.045210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.045224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.048602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.048640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.048653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.052909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.052948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.052961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.057848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.057888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.057901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.062779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.062818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.062831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.065898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.065943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.065955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.070659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.070696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.070710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.075436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.075486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.075500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.080274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.080313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.080326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.082964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.083001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.083013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.087587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.087625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.087639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.092466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.092516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.092530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.097411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.097474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.097488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.100803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.100841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.100853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.105233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.105272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.105285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.109832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.109872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.109885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.114719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.114759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.114772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.117366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.117402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.117415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.122397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.122464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.122478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.127588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.127627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.127640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.130664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.130699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.130712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.134876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.134915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.134928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.139832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.139871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.139884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.143932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.143998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.144027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.147443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.731 [2024-12-06 13:18:15.147492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.731 [2024-12-06 13:18:15.147506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.731 [2024-12-06 13:18:15.151774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.151813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.151826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.155938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.155978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.155992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.159617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.159655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.159668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.163090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.163129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.163142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.166706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.166744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.166756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.169953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.169994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.170007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.174023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.174061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.174074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.178284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.178323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.178336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.181697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.181735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.181748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.186128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.186168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.186181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.191349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.191402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.191431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.196681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.196734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.196764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.201132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.201185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.201214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.203990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.204043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.204056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.208912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.208980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.209009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.212465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.212527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.212556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.216894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.216962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.216991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.222016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.222080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.222100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.226597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.226636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.226649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.229307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.229344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.229357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.233973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.234012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.234026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.237548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.237587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.237600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.241015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.241053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.241066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.245311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.245349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.245362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.250293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.250333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.250346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.253849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.253887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.253900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.258140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.732 [2024-12-06 13:18:15.258180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.732 [2024-12-06 13:18:15.258193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.732 [2024-12-06 13:18:15.262486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.262525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.262539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.266053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.266092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.266105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.269582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.269621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.269634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.273373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.273412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.273425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.276900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.276951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.276964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.281165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.281205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.281217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.285934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.285973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.285986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.288974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.289012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.289025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.293515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.293553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.293566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.733 7261.00 IOPS, 907.62 MiB/s [2024-12-06T13:18:15.373Z] [2024-12-06 13:18:15.299639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.299680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.299694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.303152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.303190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.303203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.307328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.307369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.307382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.312219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.312258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.312272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.315620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.315660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.315673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.319601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.319640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.319653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.323735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) [2024-12-06 13:18:15.323774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.323787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.326801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.326841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.326854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.331112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.331151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.331165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.336087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.336127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.336140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.341069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.341109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.341122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.345262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.345301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.345314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.348711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.348749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.348763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.353095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.353136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.353149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.357821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.357861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.357875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.362852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.362893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.362907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.733 [2024-12-06 13:18:15.366345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.733 [2024-12-06 13:18:15.366384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.733 [2024-12-06 13:18:15.366397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.992 [2024-12-06 13:18:15.370863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.992 [2024-12-06 13:18:15.370906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.992 [2024-12-06 13:18:15.370920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.992 [2024-12-06 13:18:15.374733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.992 [2024-12-06 13:18:15.374774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.992 [2024-12-06 13:18:15.374787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.992 [2024-12-06 13:18:15.378110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.992 [2024-12-06 13:18:15.378150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.992 [2024-12-06 13:18:15.378163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.992 [2024-12-06 13:18:15.382856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.992 [2024-12-06 13:18:15.382901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.992 [2024-12-06 13:18:15.382914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.993 [2024-12-06 13:18:15.386544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.993 [2024-12-06 13:18:15.386599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-12-06 13:18:15.386612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.993 [2024-12-06 13:18:15.390193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.993 [2024-12-06 13:18:15.390232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-12-06 13:18:15.390245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.993 [2024-12-06 13:18:15.395126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.993 [2024-12-06 13:18:15.395166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-12-06 13:18:15.395180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.993 [2024-12-06 13:18:15.400423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.993 [2024-12-06 13:18:15.400476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-12-06 13:18:15.400491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.993 [2024-12-06 13:18:15.405402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.993 [2024-12-06 13:18:15.405476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-12-06 13:18:15.405491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.993 [2024-12-06 13:18:15.410388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:26.993 [2024-12-06 13:18:15.410443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-12-06 13:18:15.410486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.993 [2024-12-06 13:18:15.413988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.414027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.414040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.418424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.418476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.418490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.423656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.423696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.423709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.428308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.428361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.428390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.432701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.432740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.432754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.435800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.435839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.435852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.440366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.440406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.440420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.444246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.444286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.444299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.447532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.447572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.447586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.451661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.451701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.451714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.456193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.456235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.456248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.460597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.460638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.460651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.465317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.465361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.465375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.468109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.468148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.468161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.473058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.473104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.473118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.478340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.478387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.478400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.483205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.483251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.483265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.487789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.487833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.487847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.492106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.492147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.492160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.496716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.496756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.993 [2024-12-06 13:18:15.496770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.499609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.499646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:26.993 [2024-12-06 13:18:15.499658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.993 [2024-12-06 13:18:15.504764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.993 [2024-12-06 13:18:15.504803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.504816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.509465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.509504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.509517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.514522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.514560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.514573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.518072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.518113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.518125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.522476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.522512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.522525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.526854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.526894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.526906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.530269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.530308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.530321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.534369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.534408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.534421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.537671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.537709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.537721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.541694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.541732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.541745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.545974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.546013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.546026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.549687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.549726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.549739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.553639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.553677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.553690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.558240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.558280] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.558293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.560870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.560905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.560918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.565807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.565845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.565859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.569994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.570033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.570046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.573052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.573089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.573102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.577664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.577704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.577717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.580928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.580966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.580979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.585099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.585140] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.585153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.590256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.590295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.590323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.595508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.595561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.595575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.600290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.600331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.600344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.603411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.603461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.603475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.608265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.608302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.608316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.613296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.613333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-12-06 13:18:15.613346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.994 [2024-12-06 13:18:15.616445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1983dd0) 00:24:26.994 [2024-12-06 13:18:15.616507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.995 [2024-12-06 13:18:15.616520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.995 [2024-12-06 13:18:15.620381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.995 [2024-12-06 13:18:15.620419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.995 [2024-12-06 13:18:15.620433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.995 [2024-12-06 13:18:15.623978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.995 [2024-12-06 13:18:15.624015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.995 [2024-12-06 13:18:15.624028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.995 [2024-12-06 13:18:15.628226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:26.995 [2024-12-06 13:18:15.628280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.995 [2024-12-06 13:18:15.628299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.632375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.632429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.632443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.636320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.636373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.636401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.640746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.640801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.640814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.644705] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.644744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.644757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.648983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.649023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.649036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.652963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.653016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.653045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.656546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.656584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.656597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.660842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.660882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.660894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.665213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.665251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.665264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.668713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.668766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.668795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:24:27.254 [2024-12-06 13:18:15.672845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.672885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.672898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.677840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.677879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.677893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.682889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.682929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.682941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.687763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.687803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.687816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.690586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.690623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.254 [2024-12-06 13:18:15.690636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.254 [2024-12-06 13:18:15.695832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.254 [2024-12-06 13:18:15.695886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.695899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.700657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.700708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.700736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.704197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.704248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.704276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.708511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.708578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.708591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.712928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.712982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.713010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.716182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.716235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.716263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.720856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.720908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.720936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.725768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.725847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.725861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.729292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.729345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.729358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.733773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.733837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.733850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.738623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.738662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.738675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.742982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.743034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.743061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.746131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.746171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.746184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.750802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.750857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.750870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.755615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.755668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.755697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.759929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.759980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:27.255 [2024-12-06 13:18:15.760008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.762979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.763030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.763058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.767422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.767500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.767531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.772057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.772124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.772152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.777140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.777191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.777219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.780591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.780643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.780670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.784857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.784908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.784936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.789728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.789780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.789833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.794281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.794347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.794375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.797213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.797263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.797291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.255 [2024-12-06 13:18:15.801145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.255 [2024-12-06 13:18:15.801196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.255 [2024-12-06 13:18:15.801223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.256 [2024-12-06 13:18:15.805553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.256 [2024-12-06 13:18:15.805621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.256 [2024-12-06 13:18:15.805634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.256 [2024-12-06 13:18:15.810013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.256 [2024-12-06 13:18:15.810051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.256 [2024-12-06 13:18:15.810065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.256 [2024-12-06 13:18:15.814844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.256 [2024-12-06 13:18:15.814916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.256 [2024-12-06 13:18:15.814930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.256 [2024-12-06 13:18:15.819792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0) 00:24:27.256 [2024-12-06 13:18:15.819844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.256 [2024-12-06 13:18:15.819873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:27.256 [2024-12-06 13:18:15.822982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:27.256 [2024-12-06 13:18:15.823036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.256 [2024-12-06 13:18:15.823049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... ~100 similar entry groups elided: data digest error on tqpair=(0x1983dd0), a READ on qid:1 with varying cid/lba, each completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); timestamps 13:18:15.827 through 13:18:16.290 ...]
00:24:27.779 7248.00 IOPS, 906.00 MiB/s [2024-12-06T13:18:16.419Z]
[2024-12-06 13:18:16.296365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1983dd0)
00:24:27.779 [2024-12-06 13:18:16.296405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.779 [2024-12-06 13:18:16.296419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:27.779
00:24:27.779 Latency(us)
00:24:27.779 [2024-12-06T13:18:16.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:27.779 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:27.779 nvme0n1 : 2.00 7245.89 905.74 0.00 0.00 2203.90 588.33 8400.52
00:24:27.779 [2024-12-06T13:18:16.419Z] ===================================================================================================================
00:24:27.779 [2024-12-06T13:18:16.419Z] Total : 7245.89 905.74 0.00 0.00 2203.90 588.33 8400.52
00:24:27.779 {
00:24:27.779 "results": [
00:24:27.780 {
00:24:27.780 "job": "nvme0n1",
00:24:27.780 "core_mask": "0x2",
00:24:27.780 "workload": "randread",
00:24:27.780 "status": "finished",
00:24:27.780 "queue_depth": 16,
00:24:27.780 "io_size": 131072,
00:24:27.780 "runtime": 2.002791,
00:24:27.780 "iops": 7245.888362789727,
00:24:27.780 "mibps": 905.7360453487158,
00:24:27.780 "io_failed": 0,
00:24:27.780 "io_timeout": 0,
00:24:27.780 "avg_latency_us": 2203.8967384985467,
00:24:27.780 "min_latency_us": 588.3345454545455,
00:24:27.780 "max_latency_us": 8400.523636363636
00:24:27.780 }
00:24:27.780 ],
00:24:27.780 "core_count": 1
00:24:27.780 }
00:24:27.780 13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:27.780 | .driver_specific
00:24:27.780 | .nvme_error
00:24:27.780 | .status_code
00:24:27.780 | .command_transient_transport_error'
00:24:27.780 13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:28.038 13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 469 > 0 ))
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95046
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95046 ']'
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95046
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95046
00:24:28.297 13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95046'
killing process with pid 95046
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95046
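The trace above is the whole randread verification step: host/digest.sh dumps per-bdev I/O statistics over the bperf RPC socket with bdev_get_iostat, pulls the transient-transport-error counter out of the nvme_error block with jq, and requires it to be positive ((( 469 > 0 )) in this run) before tearing the app down. A minimal standalone sketch of that check in bash, assuming a bdevperf instance is already listening on /var/tmp/bperf.sock and was configured with --nvme-error-stat; SPDK_DIR is our placeholder for the repo path used in this run:

    # Read the COMMAND TRANSIENT TRANSPORT ERROR counter for a bdev,
    # mirroring get_transient_errcount in host/digest.sh.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    get_transient_errcount() {
        local bdev=$1
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    # The test only asserts that at least one injected digest error surfaced
    # as a transient transport error; the exact count varies run to run.
    (( errcount > 0 )) || echo 'no transient transport errors recorded' >&2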
Received shutdown signal, test time was about 2.000000 seconds
00:24:28.297
00:24:28.297 Latency(us)
00:24:28.297 [2024-12-06T13:18:16.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:28.297 [2024-12-06T13:18:16.937Z] ===================================================================================================================
00:24:28.297 [2024-12-06T13:18:16.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:28.297 13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95046
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95141
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95141 /var/tmp/bperf.sock
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95141 ']'
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
13:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:28.556 [2024-12-06 13:18:16.948425] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization...
00:24:28.556 [2024-12-06 13:18:16.948525] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95141 ]
00:24:28.556 [2024-12-06 13:18:17.095196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:28.556 [2024-12-06 13:18:17.160168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:28.816 13:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:28.816 13:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:24:28.816 13:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:28.816 13:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:29.074 13:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:29.074 13:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.074 13:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:29.074 13:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.074 13:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:29.074 13:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:29.641 nvme0n1
00:24:29.641 13:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:29.641 13:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.641 13:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:29.641 13:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.641 13:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:29.641 13:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:29.641 Running I/O for 2 seconds...
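The randwrite pass traced above repeats the arm-then-measure pattern: bdevperf starts idle (-z) on the RPC socket, waitforlisten polls until the socket answers, error statistics and unlimited bdev retries are enabled, any stale crc32c injection is cleared, the controller is attached with TCP data digest enabled (--ddgst), and only then is crc32c corruption armed, so some write digests are computed wrong once perform_tests begins. A condensed bash sketch of that sequence, with all RPC arguments copied verbatim from the trace; the rpc wrapper and the rpc_get_methods polling loop are our shorthand for bperf_rpc and waitforlisten:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # Start bdevperf idle (-z); it runs no I/O until perform_tests is called.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    until rpc rpc_get_methods &> /dev/null; do sleep 0.1; done

    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc accel_error_inject_error -o crc32c -t disable        # start clean
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0       # data digest on
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256 # arm corruption (-i 256 as traced)
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests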
00:24:29.641 [2024-12-06 13:18:18.202619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef5378 00:24:29.641 [2024-12-06 13:18:18.203863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.641 [2024-12-06 13:18:18.203910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:29.641 [2024-12-06 13:18:18.217153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee3d08 00:24:29.641 [2024-12-06 13:18:18.219008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.641 [2024-12-06 13:18:18.219219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.641 [2024-12-06 13:18:18.225972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee12d8 00:24:29.641 [2024-12-06 13:18:18.226788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.641 [2024-12-06 13:18:18.226827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.641 [2024-12-06 13:18:18.240439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef2948 00:24:29.641 [2024-12-06 13:18:18.242039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.642 [2024-12-06 13:18:18.242244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:29.642 [2024-12-06 13:18:18.251828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efb8b8 00:24:29.642 [2024-12-06 13:18:18.253134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.642 [2024-12-06 13:18:18.253295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.642 [2024-12-06 13:18:18.265508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef5378 00:24:29.642 [2024-12-06 13:18:18.267054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.642 [2024-12-06 13:18:18.267094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:29.642 [2024-12-06 13:18:18.276721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef6cc8 00:24:29.642 [2024-12-06 13:18:18.278116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.642 [2024-12-06 13:18:18.278163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.287373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee27f0 00:24:29.901 [2024-12-06 13:18:18.289104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.289148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.299749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efc998 00:24:29.901 [2024-12-06 13:18:18.300939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.300979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.311100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef7538 00:24:29.901 [2024-12-06 13:18:18.312301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.312335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.322601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee6b70 00:24:29.901 [2024-12-06 13:18:18.323472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.323518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.333933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee1f80 00:24:29.901 [2024-12-06 13:18:18.334860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.335028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.348284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eddc00 00:24:29.901 [2024-12-06 13:18:18.349666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.349844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.359602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee95a0 00:24:29.901 [2024-12-06 13:18:18.360739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.360773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.371384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef9f68 00:24:29.901 [2024-12-06 13:18:18.372620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.372653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.383619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efb8b8 00:24:29.901 [2024-12-06 13:18:18.384200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.384239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.397249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef7538 00:24:29.901 [2024-12-06 13:18:18.398675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.398716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.409752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef5378 00:24:29.901 [2024-12-06 13:18:18.411583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.901 [2024-12-06 13:18:18.411623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:29.901 [2024-12-06 13:18:18.418430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efda78 00:24:29.901 [2024-12-06 13:18:18.419207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.902 [2024-12-06 13:18:18.419245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.902 [2024-12-06 13:18:18.432881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee95a0 00:24:29.902 [2024-12-06 13:18:18.434347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.902 [2024-12-06 13:18:18.434389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:29.902 [2024-12-06 13:18:18.444830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef1430 00:24:29.902 [2024-12-06 13:18:18.445875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.902 [2024-12-06 13:18:18.445915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:29.902 [2024-12-06 13:18:18.456334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee23b8 00:24:29.902 [2024-12-06 13:18:18.457279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.902 [2024-12-06 13:18:18.457495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.902 [2024-12-06 13:18:18.467777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eee5c8 00:24:29.902 [2024-12-06 13:18:18.468431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.902 [2024-12-06 13:18:18.468486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.902 [2024-12-06 13:18:18.481441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee8d30 00:24:29.902 [2024-12-06 13:18:18.482916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.902 [2024-12-06 13:18:18.482957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:29.902 [2024-12-06 13:18:18.492778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef7970 00:24:29.902 [2024-12-06 13:18:18.494305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.902 [2024-12-06 13:18:18.494347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.902 [2024-12-06 13:18:18.504383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef8618 00:24:29.902 [2024-12-06 13:18:18.505700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.902 [2024-12-06 13:18:18.505738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.902 [2024-12-06 13:18:18.516100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee8088 00:24:29.902 [2024-12-06 13:18:18.517295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.902 [2024-12-06 13:18:18.517331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:29.902 [2024-12-06 13:18:18.527539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef3a28 00:24:29.902 [2024-12-06 13:18:18.528439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.902 [2024-12-06 13:18:18.528499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.541910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee27f0 00:24:30.161 [2024-12-06 13:18:18.543591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.543752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.553397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efa3a0 00:24:30.161 [2024-12-06 13:18:18.554724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.554884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.565233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eeff18 00:24:30.161 [2024-12-06 13:18:18.566734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.566775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.576748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eed0b0 00:24:30.161 [2024-12-06 13:18:18.577781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.577829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.588746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efa7d8 00:24:30.161 [2024-12-06 13:18:18.589893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.589932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.601022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eee5c8 00:24:30.161 [2024-12-06 13:18:18.601655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.601694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.614899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ede8a8 00:24:30.161 [2024-12-06 13:18:18.616527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.616561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.626398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eef6a8 00:24:30.161 [2024-12-06 13:18:18.627867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.627907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.637584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efd208 00:24:30.161 [2024-12-06 13:18:18.638879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.638917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.649467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef81e0 00:24:30.161 [2024-12-06 13:18:18.650654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.650693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.664458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee88f8 00:24:30.161 [2024-12-06 13:18:18.666555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.666601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.674052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef1868 00:24:30.161 [2024-12-06 13:18:18.674967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.675003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.692123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efa3a0 00:24:30.161 [2024-12-06 13:18:18.693945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.694003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.707893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eee190 00:24:30.161 [2024-12-06 13:18:18.710095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.710151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.723463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eeea00 00:24:30.161 [2024-12-06 13:18:18.726109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.726149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:30.161 [2024-12-06 13:18:18.737782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef2948 00:24:30.161 [2024-12-06 13:18:18.739200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-12-06 13:18:18.739381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:30.162 [2024-12-06 13:18:18.752675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee6fa8 00:24:30.162 [2024-12-06 13:18:18.753939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-12-06 13:18:18.753980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:30.162 [2024-12-06 13:18:18.768797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef9b30 00:24:30.162 [2024-12-06 13:18:18.769995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-12-06 13:18:18.770035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:30.162 [2024-12-06 13:18:18.785796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eed0b0 00:24:30.162 [2024-12-06 13:18:18.787413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-12-06 13:18:18.787628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.802332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eef6a8 00:24:30.421 [2024-12-06 13:18:18.803702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.803742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.822318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eee190 00:24:30.421 [2024-12-06 13:18:18.824828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.824885] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.837662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eed920 00:24:30.421 [2024-12-06 13:18:18.840225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.840267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.851003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016edfdc0 00:24:30.421 [2024-12-06 13:18:18.852646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.852684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.866357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eeaab8 00:24:30.421 [2024-12-06 13:18:18.868032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.868070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.881644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee9e10 00:24:30.421 [2024-12-06 13:18:18.883333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.883551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.897024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efb8b8 00:24:30.421 [2024-12-06 13:18:18.898805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.898838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.914385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee4de8 00:24:30.421 [2024-12-06 13:18:18.916833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.916873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.925223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eeea00 00:24:30.421 [2024-12-06 13:18:18.926431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 
13:18:18.926481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.945067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee38d0 00:24:30.421 [2024-12-06 13:18:18.947573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.947612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.956086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eff3c8 00:24:30.421 [2024-12-06 13:18:18.957456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.957498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.972394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eeea00 00:24:30.421 [2024-12-06 13:18:18.973643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.973681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:18.989743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eeea00 00:24:30.421 [2024-12-06 13:18:18.991866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:18.991908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:19.005148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee5220 00:24:30.421 [2024-12-06 13:18:19.007129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:19.007313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:19.017446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee73e0 00:24:30.421 [2024-12-06 13:18:19.018609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:19.018662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:19.032782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee0a68 00:24:30.421 [2024-12-06 13:18:19.033912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:30.421 [2024-12-06 13:18:19.033949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:30.421 [2024-12-06 13:18:19.052728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee5220 00:24:30.421 [2024-12-06 13:18:19.055165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.421 [2024-12-06 13:18:19.055206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.067851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef6458 00:24:30.685 [2024-12-06 13:18:19.070241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.070287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.078795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef5be8 00:24:30.685 [2024-12-06 13:18:19.080085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.080116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.098995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee0a68 00:24:30.685 [2024-12-06 13:18:19.101315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.101354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.109735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee0a68 00:24:30.685 [2024-12-06 13:18:19.111065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.111222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.125963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efef90 00:24:30.685 [2024-12-06 13:18:19.127052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.127095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.140291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee9e10 00:24:30.685 [2024-12-06 13:18:19.141106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7719 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:30.685 [2024-12-06 13:18:19.141144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.155773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef3a28 00:24:30.685 [2024-12-06 13:18:19.157863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.157902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.164949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee0630 00:24:30.685 [2024-12-06 13:18:19.165954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.165992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.179636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee5ec8 00:24:30.685 [2024-12-06 13:18:19.181206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.181244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:30.685 18727.00 IOPS, 73.15 MiB/s [2024-12-06T13:18:19.325Z] [2024-12-06 13:18:19.196114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee73e0 00:24:30.685 [2024-12-06 13:18:19.198092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.198141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.204974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eee190 00:24:30.685 [2024-12-06 13:18:19.205966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.206056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.219570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee3060 00:24:30.685 [2024-12-06 13:18:19.221084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.221114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.232001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eec408 00:24:30.685 [2024-12-06 13:18:19.233251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:94 nsid:1 lba:7657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.233431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.244136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef6cc8 00:24:30.685 [2024-12-06 13:18:19.245383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.245547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.256160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee3060 00:24:30.685 [2024-12-06 13:18:19.257451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.257653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.268530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eec408 00:24:30.685 [2024-12-06 13:18:19.269867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.269912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.280371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee2c28 00:24:30.685 [2024-12-06 13:18:19.281884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.281925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.293214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef20d8 00:24:30.685 [2024-12-06 13:18:19.294596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.685 [2024-12-06 13:18:19.294649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:30.685 [2024-12-06 13:18:19.305345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eea248 00:24:30.686 [2024-12-06 13:18:19.306641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.686 [2024-12-06 13:18:19.306677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:30.945 [2024-12-06 13:18:19.320002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee6b70 00:24:30.945 [2024-12-06 13:18:19.322042] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.945 [2024-12-06 13:18:19.322079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:30.945 [2024-12-06 13:18:19.328963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eeff18 00:24:30.945 [2024-12-06 13:18:19.329976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.945 [2024-12-06 13:18:19.330022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:30.945 [2024-12-06 13:18:19.343825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef8a50 00:24:30.945 [2024-12-06 13:18:19.345584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.945 [2024-12-06 13:18:19.345630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:30.945 [2024-12-06 13:18:19.356624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee0ea0 00:24:30.945 [2024-12-06 13:18:19.358396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.945 [2024-12-06 13:18:19.358432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:30.945 [2024-12-06 13:18:19.368547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef6890 00:24:30.945 [2024-12-06 13:18:19.370205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.945 [2024-12-06 13:18:19.370242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:30.945 [2024-12-06 13:18:19.380392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eed4e8 00:24:30.945 [2024-12-06 13:18:19.381719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.945 [2024-12-06 13:18:19.381887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:30.945 [2024-12-06 13:18:19.392017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eec840 00:24:30.945 [2024-12-06 13:18:19.393163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.945 [2024-12-06 13:18:19.393202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:30.945 [2024-12-06 13:18:19.403850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee73e0 00:24:30.945 [2024-12-06 13:18:19.405027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.405097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.418547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efb8b8 00:24:30.946 [2024-12-06 13:18:19.420516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.420687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.427583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eeaef0 00:24:30.946 [2024-12-06 13:18:19.428680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.428853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.442958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee4de8 00:24:30.946 [2024-12-06 13:18:19.444729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.444929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.454882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ee4578 00:24:30.946 [2024-12-06 13:18:19.456155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.456355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.466989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef31b8 00:24:30.946 [2024-12-06 13:18:19.468528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.468726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.482227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eee190 00:24:30.946 [2024-12-06 13:18:19.484392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.484596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.491416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef0350 00:24:30.946 [2024-12-06 13:18:19.492601] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.492783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.506760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef9b30 00:24:30.946 [2024-12-06 13:18:19.508742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.508913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.518823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efe2e8 00:24:30.946 [2024-12-06 13:18:19.520182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.520392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.532070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016efc128 00:24:30.946 [2024-12-06 13:18:19.533891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.534062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.544331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016eefae0 00:24:30.946 [2024-12-06 13:18:19.545485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.545530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.556115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef0ff8 00:24:30.946 [2024-12-06 13:18:19.557362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.557402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.570672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef8a50 00:24:30.946 [2024-12-06 13:18:19.572550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.946 [2024-12-06 13:18:19.572596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:30.946 [2024-12-06 13:18:19.579389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) with pdu=0x200016ef92c0 00:24:30.946 [2024-12-06 
13:18:19.580287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:30.946 [2024-12-06 13:18:19.580445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0
[... ~50 similar sequences trimmed, 13:18:19.593958 through 13:18:20.190623: each is tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82acb0) at a varying pdu offset, followed by nvme_qpair.c: 243:nvme_io_qpair_print_command (*NOTICE*: WRITE sqid:1, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000, varying cid/lba) and nvme_qpair.c: 474:spdk_nvme_print_completion (*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 p:0 m:0 dnr:0) ...]
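Each injected CRC fault above surfaces as the same three-line pattern: a data-digest error from the TCP transport's crc32 callback, the initiator's print of the failed WRITE, and a completion carrying status COMMAND TRANSIENT TRANSPORT ERROR (00/22). When triaging a captured console log by hand, the completions can be tallied with a one-line sketch (bdevperf.log is a hypothetical capture file, not something this job writes):

  # Each injected data-digest failure produces exactly one completion with the
  # transient transport error status, so counting those lines counts the faults.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf.log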
00:24:31.725 19605.50 IOPS, 76.58 MiB/s
00:24:31.725 Latency(us)
[2024-12-06T13:18:20.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:31.725 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:31.725 nvme0n1 : 2.01 19629.08 76.68 0.00 0.00 6514.42 2591.65 21328.99
[2024-12-06T13:18:20.365Z] ===================================================================================================================
[2024-12-06T13:18:20.365Z] Total : 19629.08 76.68 0.00 0.00 6514.42 2591.65 21328.99
00:24:31.725 {
00:24:31.725 "results": [
00:24:31.725 {
00:24:31.725 "job": "nvme0n1",
00:24:31.725 "core_mask": "0x2",
00:24:31.725 "workload": "randwrite",
00:24:31.725 "status": "finished",
00:24:31.725 "queue_depth": 128,
00:24:31.725 "io_size": 4096,
00:24:31.725 "runtime": 2.010537,
00:24:31.725 "iops": 19629.08417005009,
00:24:31.725 "mibps": 76.67611003925816,
00:24:31.725 "io_failed": 0,
00:24:31.725 "io_timeout": 0,
00:24:31.725 "avg_latency_us": 6514.419212236389,
00:24:31.725 "min_latency_us": 2591.650909090909,
00:24:31.725 "max_latency_us": 21328.98909090909
00:24:31.725 }
00:24:31.725 ],
00:24:31.725 "core_count": 1
00:24:31.725 }
00:24:31.725 13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:31.725 | .driver_specific
00:24:31.725 | .nvme_error
00:24:31.725 | .status_code
00:24:31.725 | .command_transient_transport_error'
00:24:31.983 13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 154 > 0 ))
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95141
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95141 ']'
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95141
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95141
killing process with pid 95141
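The (( 154 > 0 )) assertion above is the actual pass/fail gate for this case: 154 transient-transport-error completions were counted, and the test only requires the count to be positive (the exact number depends on timing). The counter can be read back by hand with the same rpc.py call and jq filter the harness traces; a minimal sketch, assuming bdevperf is still listening on /var/tmp/bperf.sock:

  # Query nvme0n1's error statistics over bdevperf's RPC socket and extract the
  # count of COMMAND TRANSIENT TRANSPORT ERROR completions (these counters exist
  # because bdev_nvme_set_options was called with --nvme-error-stat earlier).
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "saw $errcount transient transport errors"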
Received shutdown signal, test time was about 2.000000 seconds
00:24:31.983 Latency(us)
[2024-12-06T13:18:20.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-06T13:18:20.623Z] ===================================================================================================================
[2024-12-06T13:18:20.623Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:31.983 13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95141'
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95141
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95141
00:24:32.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:32.241 13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95218
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95218 /var/tmp/bperf.sock
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95218 ']'
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
13:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
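bdevperf is launched here with -z, so it initializes, opens its RPC socket, and then idles until a perform_tests RPC arrives instead of starting I/O immediately; waitforlisten just polls that socket until it answers. A condensed sketch of the traced launch (binary path and flags copied from the trace above; the polling loop is illustrative, not the harness's exact implementation):

  # Start bdevperf pinned to core 1 (-m 2 is a core mask), with a private RPC
  # socket, randwrite at 128 KiB I/O, queue depth 16, 2 s; -z parks it until
  # a perform_tests RPC is received.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Poll until the RPC socket accepts requests (waitforlisten allows ~100 retries).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done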
00:24:32.241 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:32.241 Zero copy mechanism will not be used.
00:24:32.241 [2024-12-06 13:18:20.854712] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization...
00:24:32.241 [2024-12-06 13:18:20.854801] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95218 ]
00:24:32.499 [2024-12-06 13:18:21.004920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:32.499 [2024-12-06 13:18:21.080881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:32.757 13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:33.017 13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:33.275 nvme0n1
00:24:33.534 13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
13:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:33.534 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:33.534 Zero copy mechanism will not be used.
00:24:33.534 Running I/O for 2 seconds...
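Those RPCs are the whole mechanism of this error case: the bdev layer is told to keep per-status-code NVMe error counters and never retry failed I/O (--nvme-error-stat --bdev-retry-count -1), the controller is attached with TCP data digest enabled (--ddgst), and accel_error_inject_error asks the crc32c path to corrupt results (-t corrupt -i 32, flags copied verbatim from the trace), so data-digest verification fails intermittently and each failure completes as a transient transport error. A condensed sketch, assuming the harness's rpc_cmd helper talks to the target application on the default /var/tmp/spdk.sock while bperf_rpc talks to bdevperf:

  BPERF="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # bdevperf side: per-status-code error counters, no retries of failed I/O.
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # bdevperf side: attach the remote namespace with data digest turned on.
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt crc32c results so incoming WRITE digests mismatch.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # finally release the parked bdevperf job.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests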
00:24:33.534 [2024-12-06 13:18:22.081436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8
00:24:33.534 [2024-12-06 13:18:22.081761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.534 [2024-12-06 13:18:22.081947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... ~70 similar sequences trimmed, 13:18:22.087410 through 13:18:22.484442: the same tcp.c:2241:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x82aff0)", always at pdu=0x200016eff3c8, each followed by the failed WRITE (sqid:1 cid:0, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, varying lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, sqhd cycling 0002/0022/0042/0062 ...]
00:24:33.797 [2024-12-06 13:18:22.489670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8
00:24:33.797 [2024-12-06 13:18:22.489768] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.489790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.495025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.495127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.495149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.500524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.500626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.500649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.505975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.506205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.506228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.511452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.511584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.511609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.516883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.516992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.517015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.522192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.522516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.522539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.527626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 
13:18:22.527747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.527775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.532932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.533042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.533064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.538286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.538528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.538557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.543872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.543980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.544002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.549339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.549442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.549466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.554877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.554995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.555022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.560212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.560330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.560352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.565636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 
00:24:34.059 [2024-12-06 13:18:22.565718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.565741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.570876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.570978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.571001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.576197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.576302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.576325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.581369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.581477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.581500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.586677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.586785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.586807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.591962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.592083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.592105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.597388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.597552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.597574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.602709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with 
pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.602811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.602834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.608017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.608101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.608123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.613510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.613620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.613647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.618961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.619058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.619123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.624213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.059 [2024-12-06 13:18:22.624329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-12-06 13:18:22.624367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.059 [2024-12-06 13:18:22.629596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.629709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.629731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.634975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.635062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.635084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.640208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.640311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.640333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.645474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.645604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.645627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.650814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.650898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.650921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.655979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.656057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.656079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.661180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.661257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.661279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.666441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.666690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.666713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.671862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.671960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.671983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.676934] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.677010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.677033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.681975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.682188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.682211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.687347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.687459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.687486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.060 [2024-12-06 13:18:22.692567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.060 [2024-12-06 13:18:22.692642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-12-06 13:18:22.692665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.697681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.697789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.697824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.702888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.702972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.702994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.708058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.708204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.708233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.713247] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.713337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.713359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.718303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.718533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.718556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.723594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.723681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.723704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.728769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.728858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.728880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.733915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.734128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.734151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.739264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.739350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.739372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.744479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.744642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.744665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.320 
[2024-12-06 13:18:22.749594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.749693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.749731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.754841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.754918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.754941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.760009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.760108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.760146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.765114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.765344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.765366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.770439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.770537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.320 [2024-12-06 13:18:22.770559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.320 [2024-12-06 13:18:22.775579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.320 [2024-12-06 13:18:22.775654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.775687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.780669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.780768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.780791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:24:34.321 [2024-12-06 13:18:22.785888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.785964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.785987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.791063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.791141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.791162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.796185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.796402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.796425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.801623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.801721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.801744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.806931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.807028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.807082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.812318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.812597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.812620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.817686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.817763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.817786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.823024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.823107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.823129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.828244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.828522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.828545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.833653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.833767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.833790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.838982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.839064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.839086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.844271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.844532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.844555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.849939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.850169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.850493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.855177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.855406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.855576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.860469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.860713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.860862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.865895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.866136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.866299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.871220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.871517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.871711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.876734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.876964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.877137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.882065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.882299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.882481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.887403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.887652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.887896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.892997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.893298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.893439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.898681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.898915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.899067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.904205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.321 [2024-12-06 13:18:22.904434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.321 [2024-12-06 13:18:22.904689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.321 [2024-12-06 13:18:22.909491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.322 [2024-12-06 13:18:22.909742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.322 [2024-12-06 13:18:22.909898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.322 [2024-12-06 13:18:22.914821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.322 [2024-12-06 13:18:22.915062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.322 [2024-12-06 13:18:22.915253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.322 [2024-12-06 13:18:22.920155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.322 [2024-12-06 13:18:22.920415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.322 [2024-12-06 13:18:22.920650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.322 [2024-12-06 13:18:22.925398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.322 [2024-12-06 13:18:22.925641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.322 [2024-12-06 13:18:22.925768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.322 [2024-12-06 13:18:22.930785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.322 [2024-12-06 13:18:22.930861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.322 [2024-12-06 13:18:22.930884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.322 [2024-12-06 13:18:22.935878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.322 [2024-12-06 13:18:22.936097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.322 [2024-12-06 13:18:22.936124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.322 [2024-12-06 13:18:22.941330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.322 [2024-12-06 13:18:22.941605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.322 [2024-12-06 13:18:22.942033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.322 [2024-12-06 13:18:22.946984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.322 [2024-12-06 13:18:22.947280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.322 [2024-12-06 13:18:22.947472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.322 [2024-12-06 13:18:22.952475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.322 [2024-12-06 13:18:22.952710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.322 [2024-12-06 13:18:22.952867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.582 [2024-12-06 13:18:22.957750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.582 [2024-12-06 13:18:22.958003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.582 [2024-12-06 13:18:22.958173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.582 [2024-12-06 13:18:22.963035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.582 [2024-12-06 13:18:22.963269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.582 [2024-12-06 13:18:22.963432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.582 [2024-12-06 13:18:22.968540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.582 [2024-12-06 13:18:22.968793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.582 [2024-12-06 
13:18:22.968949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.582 [2024-12-06 13:18:22.973988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.582 [2024-12-06 13:18:22.974223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.582 [2024-12-06 13:18:22.974438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.582 [2024-12-06 13:18:22.979313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:22.979562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:22.979721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:22.984475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:22.984709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:22.984899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:22.989681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:22.989940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:22.990128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:22.995067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:22.995304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:22.995471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.000387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.000656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.000875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.005891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.006104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.583 [2024-12-06 13:18:23.006129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.011433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.011792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.011975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.016801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.017043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.017232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.022209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.022441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.022601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.027654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.027889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.028046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.033041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.033276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.033468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.038223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.038319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.038342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.043456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.043681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.043704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.048899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.048979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.049002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.054156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.054241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.054264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.059386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.059615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.059638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.064840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.064917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.064940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.070124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.070227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.070250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.075567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.075669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.075693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.583 5785.00 IOPS, 723.12 MiB/s [2024-12-06T13:18:23.223Z] [2024-12-06 13:18:23.081986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.082073] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.082101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.087063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.087321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.087344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.092435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.092535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.092558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.097658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.097736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.097758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.102904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.103020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.103043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.108165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.108251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.108274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.113356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.113435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.113458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.118821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.583 [2024-12-06 13:18:23.118915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.583 [2024-12-06 13:18:23.118938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.583 [2024-12-06 13:18:23.124157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.124259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.124281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.129396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.129487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.129510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.134608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.134702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.134729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.139893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.139997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.140019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.145150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.145322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.145352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.150371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.150479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.150512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.155757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 
13:18:23.155900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.155922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.161034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.161134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.161156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.166154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.166267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.166294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.171330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.171585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.171608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.176677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.176777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.176799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.181837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.181909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.181932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.186938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.187156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.187183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.192330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 
00:24:34.584 [2024-12-06 13:18:23.192406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.192429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.197605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.197691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.197714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.202785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.202862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.202884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.207760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.207863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.207891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.212774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.212852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.212875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.584 [2024-12-06 13:18:23.217795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.584 [2024-12-06 13:18:23.217897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.584 [2024-12-06 13:18:23.217920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.844 [2024-12-06 13:18:23.222821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.844 [2024-12-06 13:18:23.222934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.844 [2024-12-06 13:18:23.222962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.844 [2024-12-06 13:18:23.227814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with 
pdu=0x200016eff3c8 00:24:34.844 [2024-12-06 13:18:23.227890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.844 [2024-12-06 13:18:23.227913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.844 [2024-12-06 13:18:23.233053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.844 [2024-12-06 13:18:23.233131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.844 [2024-12-06 13:18:23.233153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.844 [2024-12-06 13:18:23.238132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.844 [2024-12-06 13:18:23.238259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.844 [2024-12-06 13:18:23.238286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.844 [2024-12-06 13:18:23.243256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.844 [2024-12-06 13:18:23.243356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.844 [2024-12-06 13:18:23.243379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.844 [2024-12-06 13:18:23.248319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.248430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.248466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.253272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.253389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.253415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.258337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.258446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.258468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.263420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.263515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.263538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.268607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.268684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.268709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.273664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.273743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.273765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.278680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.278758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.278780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.283676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.283749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.283773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.288803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.288889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.288913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.294014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.294143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.294170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.299066] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.299179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.299201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.304133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.304232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.304255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.309231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.309309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.309332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.314382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.314479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.314503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.319574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.319729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.319752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.324650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.324755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.324778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.329679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.329762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.329784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.334801] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.334896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.334918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.339887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.339957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.339980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.344860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.345007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.345034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.350446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.350560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.350587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.355567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.355676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.355697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.360691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.360781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.360805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.365764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.365867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.365889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.845 
[2024-12-06 13:18:23.370862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.370955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.370977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.375947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.376050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.376077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.381057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.845 [2024-12-06 13:18:23.381140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.845 [2024-12-06 13:18:23.381162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.845 [2024-12-06 13:18:23.386067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.386158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.386182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.391201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.391310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.391338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.396342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.396442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.396478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.401342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.401426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.401463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.406291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.406375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.406396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.411265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.411347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.411370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.416401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.416518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.416541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.421586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.421681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.421704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.426749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.426828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.426851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.431755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.431878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.431900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.436838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.436914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.436936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.442003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.442082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.442106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.447013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.447095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.447118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.451968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.452047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.452069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.457033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.457111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.457133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.462174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.462266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.462289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.467303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.467392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.467414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.472428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.472561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.472583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.846 [2024-12-06 13:18:23.477841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:34.846 [2024-12-06 13:18:23.477936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.846 [2024-12-06 13:18:23.477959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.482966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.483055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.483078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.488073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.488163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.488200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.493246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.493324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.493346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.498559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.498645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.498667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.504011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.504090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.504112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.509136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.509246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.509270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.514429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.514562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.514589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.519677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.519754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.519777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.524946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.525025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.525047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.530150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.530233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.530261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.535567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.535642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.535665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.187 [2024-12-06 13:18:23.540807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.187 [2024-12-06 13:18:23.540924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.187 [2024-12-06 13:18:23.540947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.546065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.546161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.546184] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.551484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.551590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.551612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.556972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.557071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.557094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.562434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.562537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.562573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.567768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.567859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.567895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.573211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.573309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.573332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.578562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.578640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.578662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.583780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.583878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.583900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.589200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.589285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.589309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.594630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.594741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.594767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.599951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.600039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.600061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.605236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.605314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.605337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.610659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.610769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.610796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.615797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.615873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.615895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.621043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.621169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 
13:18:23.621205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.626384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.626461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.626497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.631589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.631697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.631734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.636950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.637022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.637045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.642346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.642428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.642452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.647638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.188 [2024-12-06 13:18:23.647742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.188 [2024-12-06 13:18:23.647764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.188 [2024-12-06 13:18:23.653060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.653138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.653160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.658381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.658483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:35.189 [2024-12-06 13:18:23.658511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.663645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.663735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.663772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.669134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.669240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.669262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.674418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.674517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.674541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.679690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.679833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.679857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.685002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.685079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.685101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.690332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.690438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.690460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.695719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.695805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.695829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.701148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.701226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.701249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.706632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.706746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.706770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.712086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.712165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.712187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.717370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.717460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.717484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.722679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.722795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.722818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.727873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.727947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.727969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.733044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.733161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.733185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.738484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.738627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.738650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.743548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.743627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.743650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.748825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.748924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.748947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.753854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.753932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.753965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.758970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.759069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.189 [2024-12-06 13:18:23.759091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.189 [2024-12-06 13:18:23.764064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.189 [2024-12-06 13:18:23.764171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.190 [2024-12-06 13:18:23.764193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.190 [2024-12-06 13:18:23.769374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.190 [2024-12-06 13:18:23.769492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.190 [2024-12-06 13:18:23.769515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.190 [2024-12-06 13:18:23.775896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.190 [2024-12-06 13:18:23.776034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.190 [2024-12-06 13:18:23.776061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.190 [2024-12-06 13:18:23.781338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.190 [2024-12-06 13:18:23.781440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.190 [2024-12-06 13:18:23.781467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.190 [2024-12-06 13:18:23.786657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.190 [2024-12-06 13:18:23.786755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.190 [2024-12-06 13:18:23.786787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.190 [2024-12-06 13:18:23.791888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.190 [2024-12-06 13:18:23.791991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.190 [2024-12-06 13:18:23.792025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.190 [2024-12-06 13:18:23.797158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.190 [2024-12-06 13:18:23.797261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.190 [2024-12-06 13:18:23.797286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.190 [2024-12-06 13:18:23.802435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.190 [2024-12-06 13:18:23.802561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.190 [2024-12-06 13:18:23.802587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.190 [2024-12-06 13:18:23.807797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.190 [2024-12-06 13:18:23.807875] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.190 [2024-12-06 13:18:23.807901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.190 [2024-12-06 13:18:23.813373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.190 [2024-12-06 13:18:23.813519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.190 [2024-12-06 13:18:23.813546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.818893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.818999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.819024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.824271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.824372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.824405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.829619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.829742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.829768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.835233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.835364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.835390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.840605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.840701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.840734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.845910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.846012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.846038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.851183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.851288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.851312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.856479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.856570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.856595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.861956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.862069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.862098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.867281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.867425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.867449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.872879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.872976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.450 [2024-12-06 13:18:23.873001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.450 [2024-12-06 13:18:23.878240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.450 [2024-12-06 13:18:23.878325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.878348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.883480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 
13:18:23.883620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.883642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.888783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.888939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.888976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.894053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.894139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.894162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.899364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.899478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.899502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.904628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.904783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.904807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.909855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.909944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.909969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.915250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.915353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.915382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.920535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 
00:24:35.451 [2024-12-06 13:18:23.920616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.920640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.925976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.926056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.926080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.931290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.931372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.931394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.936594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.936698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.936720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.941857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.941957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.941981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.947123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.947206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.947228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.952399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.952510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.952533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.957566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) 
with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.957644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.957667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.962816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.962911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.962938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.968107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.968249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.968271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.973509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.973647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.973681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.978923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.979013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.979035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.984202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.984321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.984343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.989786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.989873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.989896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:23.995390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:23.995484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:23.995507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:24.000662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:24.000743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:24.000765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:24.005913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:24.005984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:24.006006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:24.011087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:24.011164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:24.011187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:24.016363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:24.016447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:24.016471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:24.021494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:24.021600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:24.021622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:24.026585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:24.026664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:24.026687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:24.031926] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:24.032006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:24.032029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:24.037304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:24.037392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:24.037412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.451 [2024-12-06 13:18:24.042476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.451 [2024-12-06 13:18:24.042557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.451 [2024-12-06 13:18:24.042580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.452 [2024-12-06 13:18:24.047602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.452 [2024-12-06 13:18:24.047689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.452 [2024-12-06 13:18:24.047711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.452 [2024-12-06 13:18:24.052857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.452 [2024-12-06 13:18:24.052935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.452 [2024-12-06 13:18:24.052956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.452 [2024-12-06 13:18:24.058175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.452 [2024-12-06 13:18:24.058254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.452 [2024-12-06 13:18:24.058277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.452 [2024-12-06 13:18:24.063472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.452 [2024-12-06 13:18:24.063573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.452 [2024-12-06 13:18:24.063595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:35.452 [2024-12-06 13:18:24.068813] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.452 [2024-12-06 13:18:24.068909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.452 [2024-12-06 13:18:24.068936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:35.452 [2024-12-06 13:18:24.074306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.452 [2024-12-06 13:18:24.074396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.452 [2024-12-06 13:18:24.074418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:35.452 5842.50 IOPS, 730.31 MiB/s [2024-12-06T13:18:24.092Z] [2024-12-06 13:18:24.080986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x82aff0) with pdu=0x200016eff3c8 00:24:35.452 [2024-12-06 13:18:24.081086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.452 [2024-12-06 13:18:24.081109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:35.452 00:24:35.452 Latency(us) 00:24:35.452 [2024-12-06T13:18:24.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.452 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:35.452 nvme0n1 : 2.00 5837.89 729.74 0.00 0.00 2734.20 1884.16 6404.65 00:24:35.452 [2024-12-06T13:18:24.092Z] =================================================================================================================== 00:24:35.452 [2024-12-06T13:18:24.092Z] Total : 5837.89 729.74 0.00 0.00 2734.20 1884.16 6404.65 00:24:35.452 { 00:24:35.452 "results": [ 00:24:35.452 { 00:24:35.452 "job": "nvme0n1", 00:24:35.452 "core_mask": "0x2", 00:24:35.452 "workload": "randwrite", 00:24:35.452 "status": "finished", 00:24:35.452 "queue_depth": 16, 00:24:35.452 "io_size": 131072, 00:24:35.452 "runtime": 2.004149, 00:24:35.452 "iops": 5837.88929864995, 00:24:35.452 "mibps": 729.7361623312438, 00:24:35.452 "io_failed": 0, 00:24:35.452 "io_timeout": 0, 00:24:35.452 "avg_latency_us": 2734.197756021756, 00:24:35.452 "min_latency_us": 1884.16, 00:24:35.452 "max_latency_us": 6404.654545454546 00:24:35.452 } 00:24:35.452 ], 00:24:35.452 "core_count": 1 00:24:35.452 } 00:24:35.710 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:35.710 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:35.710 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:35.710 | .driver_specific 00:24:35.710 | .nvme_error 00:24:35.710 | .status_code 00:24:35.710 | .command_transient_transport_error' 00:24:35.710 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:35.968 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
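(Annotation: the `(( 378 > 0 ))` check that follows reads the transient-transport-error counter over the bperf RPC socket. A condensed sketch of that helper, reconstructed from the rpc.py and jq invocations logged above; the socket path, script path, and bdev name are taken from the trace, and the real digest.sh helper may differ in detail.)

get_transient_errcount() {
    # bdev_get_iostat reports per-bdev NVMe error counters; extract the
    # COMMAND TRANSIENT TRANSPORT ERROR count accumulated by the injected
    # data digest failures above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The test then asserts that at least one such error was counted, e.g.:
(( $(get_transient_errcount nvme0n1) > 0 ))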
host/digest.sh@71 -- # (( 378 > 0 )) 00:24:35.968 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95218 00:24:35.968 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95218 ']' 00:24:35.968 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95218 00:24:35.968 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:35.968 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.968 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95218 00:24:35.968 killing process with pid 95218 00:24:35.968 Received shutdown signal, test time was about 2.000000 seconds 00:24:35.968 00:24:35.968 Latency(us) 00:24:35.968 [2024-12-06T13:18:24.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.968 [2024-12-06T13:18:24.609Z] =================================================================================================================== 00:24:35.969 [2024-12-06T13:18:24.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.969 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:35.969 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:35.969 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95218' 00:24:35.969 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95218 00:24:35.969 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95218 00:24:36.227 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94925 00:24:36.227 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94925 ']' 00:24:36.227 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94925 00:24:36.227 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:36.228 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.228 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94925 00:24:36.228 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:36.228 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:36.228 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94925' 00:24:36.228 killing process with pid 94925 00:24:36.228 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94925 00:24:36.228 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94925 00:24:36.486 00:24:36.486 real 0m17.749s 00:24:36.486 user 0m34.715s 00:24:36.486 sys 0m4.501s 00:24:36.486 13:18:24 
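(Annotation: the two killprocess calls above shut down the bperf process (pid 95218) and the nvmf target (pid 94925). A minimal sketch of the pattern the trace steps through: pid validation, a Linux-only guard against signalling a sudo wrapper, then kill and wait. The real autotest_common.sh helper handles more cases; this is illustrative only.)

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    if ! kill -0 "$pid" 2> /dev/null; then
        # Matches the "Process with pid ... is not found" message in the log.
        echo "Process with pid $pid is not found"
        return 0
    fi
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Refuse to signal a sudo wrapper directly; the real helper resolves
        # and kills the wrapped child process instead.
        [[ $process_name == sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true # reap the process; ignore its exit status
}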
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.486 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:36.486 ************************************ 00:24:36.486 END TEST nvmf_digest_error 00:24:36.486 ************************************ 00:24:36.486 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:36.486 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:36.486 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:36.486 13:18:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.418 rmmod nvme_tcp 00:24:37.418 rmmod nvme_fabrics 00:24:37.418 rmmod nvme_keyring 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 94925 ']' 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 94925 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 94925 ']' 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 94925 00:24:37.418 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94925) - No such process 00:24:37.418 Process with pid 94925 is not found 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 94925 is not found' 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:37.418 13:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:37.418 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 
nomaster 00:24:37.418 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:37.418 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:37.418 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:37.418 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:37.418 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:24:37.676 00:24:37.676 real 0m37.896s 00:24:37.676 user 1m11.425s 00:24:37.676 sys 0m9.377s 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:37.676 ************************************ 00:24:37.676 END TEST nvmf_digest 00:24:37.676 ************************************ 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.676 ************************************ 00:24:37.676 START TEST nvmf_mdns_discovery 00:24:37.676 ************************************ 00:24:37.676 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:24:37.676 * Looking for test storage... 
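(Annotation: before the mdns test begins, nvmftestfini has torn the TCP test bed down in the lines above. Condensed into one place, the teardown the trace walks through is roughly the following; interface, bridge, and namespace names are as logged, and the final namespace removal is an assumption about what remove_spdk_ns does, not a copy of the real helper.)

nvmf_teardown_sketch() {
    sync
    # Unload the kernel initiator modules (the trace retries in a loop).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Restore iptables rules, dropping any SPDK_NVMF-tagged entries.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach every veth endpoint from the bridge, then bring it down.
    local ifc
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster
        ip link set "$ifc" down
    done
    # Delete the bridge plus the host-side and namespace-side interfaces.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # Assumed: remove_spdk_ns then deletes the target namespace itself.
    ip netns delete nvmf_tgt_ns_spdk
}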
00:24:37.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:37.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.934 --rc genhtml_branch_coverage=1 00:24:37.934 --rc genhtml_function_coverage=1 00:24:37.934 --rc genhtml_legend=1 00:24:37.934 --rc geninfo_all_blocks=1 00:24:37.934 --rc geninfo_unexecuted_blocks=1 00:24:37.934 00:24:37.934 ' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:37.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.934 --rc genhtml_branch_coverage=1 00:24:37.934 --rc genhtml_function_coverage=1 00:24:37.934 --rc genhtml_legend=1 00:24:37.934 --rc geninfo_all_blocks=1 00:24:37.934 --rc geninfo_unexecuted_blocks=1 00:24:37.934 00:24:37.934 ' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:37.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.934 --rc genhtml_branch_coverage=1 00:24:37.934 --rc genhtml_function_coverage=1 00:24:37.934 --rc genhtml_legend=1 00:24:37.934 --rc geninfo_all_blocks=1 00:24:37.934 --rc geninfo_unexecuted_blocks=1 00:24:37.934 00:24:37.934 ' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:37.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.934 --rc genhtml_branch_coverage=1 00:24:37.934 --rc genhtml_function_coverage=1 00:24:37.934 --rc genhtml_legend=1 00:24:37.934 --rc geninfo_all_blocks=1 00:24:37.934 --rc geninfo_unexecuted_blocks=1 00:24:37.934 00:24:37.934 ' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
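(Annotation: the lt/cmp_versions steps traced above decide whether the installed lcov, 1.15 here, predates 2.x before the coverage options are exported. A compact reconstruction of that comparison, simplified to the strict operators the trace actually exercises; the real scripts/common.sh helper also validates fields via decimal and supports more operators.)

lt() { cmp_versions "$1" '<' "$2"; } # mirrors the logged call chain

cmp_versions() {
    local IFS=.-: # split version fields on dots, dashes and colons
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v c1 c2
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        c1=${ver1[v]:-0} c2=${ver2[v]:-0} # missing fields compare as 0
        ((c1 > c2)) && { [[ $op == '>' ]]; return; }
        ((c1 < c2)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]] # every field matched
}

# As in the trace: lt 1.15 2 succeeds, since 1 < 2 in the first field.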
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.934 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:37.934 Cannot find device "nvmf_init_br" 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:37.934 Cannot find device "nvmf_init_br2" 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:37.934 Cannot find device "nvmf_tgt_br" 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:37.934 Cannot find device "nvmf_tgt_br2" 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:37.934 Cannot find device "nvmf_init_br" 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:37.934 Cannot find device "nvmf_init_br2" 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:37.934 Cannot find device "nvmf_tgt_br" 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:37.934 Cannot find device "nvmf_tgt_br2" 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:37.934 Cannot find device "nvmf_br" 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:37.934 Cannot find device "nvmf_init_if" 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:24:37.934 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:38.193 Cannot find device "nvmf_init_if2" 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:24:38.193 13:18:26 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:38.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:38.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
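For orientation, the nvmf/common.sh@177-@214 sequence traced above and continued just below builds a veth-plus-bridge topology: the target-side interfaces (10.0.0.3, 10.0.0.4) live inside the nvmf_tgt_ns_spdk namespace, the initiator addresses (10.0.0.1, 10.0.0.2) stay in the root namespace, and the nvmf_br bridge joins the peer ends. The following is a minimal sketch of the same wiring, covering only the first interface pair and assuming root privileges; it condenses what nvmf_veth_init does rather than reproducing the harness code itself.
#!/usr/bin/env bash
# Sketch of the veth/bridge wiring from the trace (names and addresses as logged).
set -e
ip netns add nvmf_tgt_ns_spdk
# One veth pair per path: the *_if end carries an address, the *_br end joins the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator address (root namespace)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# The bridge stitches the two *_br peers together (the @211-@214 steps below).
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Admit NVMe/TCP (port 4420) and bridge-local forwarding, as the ipts wrapper does below.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                        # initiator -> target, as at @222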
00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:38.193 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:38.193 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:24:38.193 00:24:38.193 --- 10.0.0.3 ping statistics --- 00:24:38.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.193 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:38.193 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:38.193 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:24:38.193 00:24:38.193 --- 10.0.0.4 ping statistics --- 00:24:38.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.193 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:38.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:38.193 00:24:38.193 --- 10.0.0.1 ping statistics --- 00:24:38.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.193 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:38.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:38.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:24:38.193 00:24:38.193 --- 10.0.0.2 ping statistics --- 00:24:38.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.193 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.193 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=95557 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 95557 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95557 ']' 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.451 13:18:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.451 [2024-12-06 13:18:26.917694] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
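The nvmfappstart -m 0x2 --wait-for-rpc step above boils down to launching nvmf_tgt inside the namespace and polling its RPC socket until it answers. Below is a simplified stand-in, assuming an SPDK checkout at $SPDK_DIR and using the retry cap visible in the trace (max_retries=100); the real waitforlisten helper in autotest_common.sh handles more edge cases.
#!/usr/bin/env bash
# Simplified launch-and-wait, mirroring nvmfappstart/waitforlisten in the trace.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk.sock

ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!

# Poll the RPC socket until the app answers; bail out if the target dies first.
for ((i = 0; i < 100; i++)); do
    if "$SPDK_DIR/scripts/rpc.py" -t 1 -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        break
    fi
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
done

# --wait-for-rpc holds back subsystem init until framework_start_init is issued,
# which is why the test sends it explicitly (mdns_discovery.sh@32 below).
"$SPDK_DIR/scripts/rpc.py" -s "$rpc_sock" framework_start_init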
00:24:38.451 [2024-12-06 13:18:26.917810] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.451 [2024-12-06 13:18:27.069839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.709 [2024-12-06 13:18:27.130299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.709 [2024-12-06 13:18:27.130352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.709 [2024-12-06 13:18:27.130366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.709 [2024-12-06 13:18:27.130377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.709 [2024-12-06 13:18:27.130386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.709 [2024-12-06 13:18:27.130864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.709 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.967 [2024-12-06 13:18:27.354785] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.967 [2024-12-06 13:18:27.366923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.967 null0 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.967 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.968 null1 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.968 null2 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.968 null3 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=95598 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 95598 /tmp/host.sock 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95598 ']' 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local 
rpc_addr=/tmp/host.sock 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.968 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.968 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.968 [2024-12-06 13:18:27.476300] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:38.968 [2024-12-06 13:18:27.476401] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95598 ] 00:24:39.225 [2024-12-06 13:18:27.627365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.225 [2024-12-06 13:18:27.691477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.225 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.225 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:39.225 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:24:39.225 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:24:39.225 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:24:39.482 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=95614 00:24:39.482 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:24:39.482 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:24:39.482 13:18:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:24:39.482 Process 1062 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:24:39.482 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:24:39.482 Successfully dropped root privileges. 00:24:39.482 avahi-daemon 0.8 starting up. 00:24:39.482 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:24:39.482 Successfully called chroot(). 00:24:39.482 Successfully dropped remaining capabilities. 00:24:39.482 No service file found in /etc/avahi/services. 00:24:40.413 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:24:40.413 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:24:40.414 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:24:40.414 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:24:40.414 Network interface enumeration completed. 00:24:40.414 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
00:24:40.414 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:24:40.414 Registering new address record for fe80::44d7:a4ff:fe56:1733 on nvmf_tgt_if.*. 00:24:40.414 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:24:40.414 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1658983772. 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:40.414 13:18:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.414 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:24:40.414 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:24:40.414 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.414 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.414 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.414 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:40.414 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:40.414 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:40.414 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:24:40.671 13:18:29 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r 
'.[].name' 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.671 [2024-12-06 13:18:29.252669] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:40.671 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.930 [2024-12-06 13:18:29.315400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.930 13:18:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:24:41.865 [2024-12-06 13:18:30.152681] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:24:42.165 [2024-12-06 13:18:30.552688] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:24:42.165 [2024-12-06 13:18:30.552732] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:24:42.165 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:42.165 cookie is 0 00:24:42.165 is_local: 1 00:24:42.165 our_own: 0 00:24:42.165 wide_area: 0 00:24:42.165 multicast: 1 00:24:42.165 cached: 1 00:24:42.165 [2024-12-06 13:18:30.652683] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:24:42.165 [2024-12-06 13:18:30.652722] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:24:42.165 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:42.165 cookie is 0 00:24:42.165 is_local: 1 00:24:42.165 our_own: 0 00:24:42.165 wide_area: 0 00:24:42.165 multicast: 1 00:24:42.165 cached: 1 00:24:43.100 [2024-12-06 13:18:31.554053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.100 [2024-12-06 13:18:31.554113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1168720 with addr=10.0.0.4, port=8009 00:24:43.100 [2024-12-06 13:18:31.554149] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:43.100 [2024-12-06 13:18:31.554169] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:43.100 [2024-12-06 13:18:31.554180] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:24:43.100 [2024-12-06 13:18:31.660830] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:43.100 [2024-12-06 13:18:31.660874] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:43.100 [2024-12-06 13:18:31.660896] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:43.359 [2024-12-06 13:18:31.746987] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:24:43.359 [2024-12-06 13:18:31.801532] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:24:43.359 [2024-12-06 13:18:31.802438] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x119daf0:1 started. 00:24:43.359 [2024-12-06 13:18:31.804489] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:24:43.359 [2024-12-06 13:18:31.804512] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:43.359 [2024-12-06 13:18:31.809289] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x119daf0 was disconnected and freed. delete nvme_qpair. 00:24:43.944 [2024-12-06 13:18:32.554016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.944 [2024-12-06 13:18:32.554091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d900 with addr=10.0.0.4, port=8009 00:24:43.944 [2024-12-06 13:18:32.554118] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:43.944 [2024-12-06 13:18:32.554144] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:43.944 [2024-12-06 13:18:32.554154] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:24:45.321 [2024-12-06 13:18:33.553997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.321 [2024-12-06 13:18:33.554065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11866a0 with addr=10.0.0.4, port=8009 00:24:45.321 [2024-12-06 13:18:33.554091] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:45.321 [2024-12-06 13:18:33.554102] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:45.321 [2024-12-06 13:18:33.554117] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:24:45.888 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:24:45.888 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:45.888 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.888 [2024-12-06 13:18:34.405727] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:24:45.888 [2024-12-06 13:18:34.408311] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:45.888 [2024-12-06 13:18:34.408349] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.888 [2024-12-06 13:18:34.413618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:24:45.888 [2024-12-06 13:18:34.414288] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:45.888 13:18:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.888 13:18:34 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:24:46.147 [2024-12-06 13:18:34.545478] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:46.147 [2024-12-06 13:18:34.545541] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:46.147 [2024-12-06 13:18:34.561059] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:24:46.147 [2024-12-06 13:18:34.561086] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:24:46.147 [2024-12-06 13:18:34.561105] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:24:46.147 [2024-12-06 13:18:34.632235] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:46.147 [2024-12-06 13:18:34.647161] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:24:46.147 [2024-12-06 13:18:34.701638] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:24:46.147 [2024-12-06 13:18:34.702339] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x119ae60:1 started. 00:24:46.147 [2024-12-06 13:18:34.703930] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:24:46.147 [2024-12-06 13:18:34.703959] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:24:46.147 [2024-12-06 13:18:34.709735] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x119ae60 was disconnected and freed. delete nvme_qpair. 
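The check_mdns_request_exists assertions in this test (the 'not found' pass at @152 above and the 'found' pass at @160 just below) work by scanning parseable avahi-browse output line by line. The following is a reconstruction of the helper's core, pieced together from the traced @85-@108 lines; the actual function in host/mdns_discovery.sh may differ in structure.
# Reconstruction of check_mdns_request_exists as exercised in this trace.
check_mdns_request_exists() {
    local process=$1 ip=$2 port=$3 check_type=$4
    local output line
    local -a lines
    # -p emits ';'-separated records; '=' lines carry the resolved host, IP, and port.
    output=$(avahi-browse -t -r _nvme-disc._tcp -p)
    readarray -t lines <<< "$output"
    for line in "${lines[@]}"; do
        if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
            [[ $check_type == found ]] && return 0    # expected present: pass
            return 1                                   # expected absent but found: fail
        fi
    done
    [[ $check_type == "not found" ]] && return 0       # expected absent: pass
    return 1
}

# e.g. the assertion at @160 just below:
check_mdns_request_exists spdk1 10.0.0.4 8009 found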
00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:24:47.081 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:24:47.081 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:24:47.081 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:24:47.081 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:47.081 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:47.081 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:47.081 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:47.081 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.082 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:47.340 13:18:35 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.340 [2024-12-06 13:18:35.852709] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:24:47.340 [2024-12-06 13:18:35.852741] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:24:47.340 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:47.340 cookie is 0 00:24:47.340 is_local: 1 00:24:47.340 our_own: 0 00:24:47.340 wide_area: 0 00:24:47.340 multicast: 1 00:24:47.340 cached: 1 00:24:47.340 [2024-12-06 13:18:35.852757] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.340 [2024-12-06 13:18:35.865663] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11a2eb0:1 started. 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.340 13:18:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:24:47.340 [2024-12-06 13:18:35.870020] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11a2eb0 was disconnected and freed. delete nvme_qpair. 
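
Every assertion in this stretch is the same three-step pattern: query the host application over its JSON-RPC socket, project the JSON with jq, then normalize with sort and xargs before a plain string compare. Condensed into standalone form, with the RPC names, socket path, filters, and expected values taken verbatim from the trace above; the rpc client path is an assumed repo-relative location, and rpc_cmd in the test is a thin wrapper around it:

  rpc=scripts/rpc.py          # assumption: SPDK's RPC client, run from the repo root
  sock=/tmp/host.sock         # the host app's RPC socket, as used throughout this run

  # one mDNS discovery service, named "mdns"
  [[ $("$rpc" -s "$sock" bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs) == mdns ]]

  # one discovery controller per advertised service
  [[ $("$rpc" -s "$sock" bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs) == 'mdns0_nvme mdns1_nvme' ]]

  # one NVMe controller and one namespace-backed bdev per attached subsystem
  [[ $("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs) == 'mdns0_nvme0 mdns1_nvme0' ]]
  [[ $("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs) == 'mdns0_nvme0n1 mdns1_nvme0n1' ]]

  # notification bookkeeping: fetch everything newer than the last seen id and
  # count it; the two mDNS-attached subsystems above account for the 2
  [[ $("$rpc" -s "$sock" notify_get_notifications -i 0 | jq '. | length') == 2 ]]
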
00:24:47.340 [2024-12-06 13:18:35.873009] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x1199e40:1 started. 00:24:47.340 [2024-12-06 13:18:35.879843] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x1199e40 was disconnected and freed. delete nvme_qpair. 00:24:47.340 [2024-12-06 13:18:35.952715] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:24:47.340 [2024-12-06 13:18:35.952751] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:24:47.340 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:47.340 cookie is 0 00:24:47.340 is_local: 1 00:24:47.340 our_own: 0 00:24:47.340 wide_area: 0 00:24:47.340 multicast: 1 00:24:47.340 cached: 1 00:24:47.340 [2024-12-06 13:18:35.952766] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:24:48.275 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:24:48.275 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.275 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.275 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.275 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:48.275 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:48.275 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.534 13:18:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.534 [2024-12-06 13:18:36.999209] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:48.534 [2024-12-06 13:18:36.999841] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:48.534 [2024-12-06 13:18:36.999874] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:48.534 [2024-12-06 13:18:36.999912] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:24:48.534 [2024-12-06 13:18:36.999927] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:24:48.534 13:18:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.534 13:18:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:24:48.534 13:18:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.534 13:18:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.534 [2024-12-06 13:18:37.011233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:24:48.534 [2024-12-06 13:18:37.011877] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:48.534 [2024-12-06 13:18:37.011944] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:24:48.535 13:18:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.535 13:18:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:24:48.535 [2024-12-06 13:18:37.141966] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:24:48.535 [2024-12-06 13:18:37.142963] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:24:48.794 [2024-12-06 13:18:37.205285] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:24:48.794 
[2024-12-06 13:18:37.205359] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:24:48.794 [2024-12-06 13:18:37.205371] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:24:48.794 [2024-12-06 13:18:37.205377] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:24:48.794 [2024-12-06 13:18:37.205397] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:24:48.794 [2024-12-06 13:18:37.205442] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:24:48.794 [2024-12-06 13:18:37.205493] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:24:48.794 [2024-12-06 13:18:37.205519] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:48.794 [2024-12-06 13:18:37.205525] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:48.794 [2024-12-06 13:18:37.205541] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:48.794 [2024-12-06 13:18:37.251090] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:48.794 [2024-12-06 13:18:37.251124] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:48.794 [2024-12-06 13:18:37.251192] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:24:48.794 [2024-12-06 13:18:37.251201] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.731 13:18:38 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:49.731 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.732 [2024-12-06 13:18:38.348653] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:49.732 [2024-12-06 13:18:38.348695] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:49.732 [2024-12-06 13:18:38.348735] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:24:49.732 [2024-12-06 13:18:38.348751] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.732 [2024-12-06 13:18:38.354673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.732 [2024-12-06 13:18:38.354710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.732 [2024-12-06 13:18:38.354724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.732 [2024-12-06 13:18:38.354735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.732 [2024-12-06 13:18:38.354745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.732 [2024-12-06 13:18:38.354755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.732 [2024-12-06 13:18:38.354765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.732 [2024-12-06 13:18:38.354775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.732 [2024-12-06 
13:18:38.354785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.732 [2024-12-06 13:18:38.360687] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:49.732 [2024-12-06 13:18:38.360751] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:24:49.732 [2024-12-06 13:18:38.362713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.732 [2024-12-06 13:18:38.362747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.732 [2024-12-06 13:18:38.362761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.732 [2024-12-06 13:18:38.362777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.732 [2024-12-06 13:18:38.362788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.732 [2024-12-06 13:18:38.362798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.732 [2024-12-06 13:18:38.362809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.732 [2024-12-06 13:18:38.362818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.732 [2024-12-06 13:18:38.362829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.732 [2024-12-06 13:18:38.364655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.732 13:18:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:24:49.994 [2024-12-06 13:18:38.372676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.994 [2024-12-06 13:18:38.374674] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.994 [2024-12-06 13:18:38.374700] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.994 [2024-12-06 13:18:38.374707] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.994 [2024-12-06 13:18:38.374714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.994 [2024-12-06 13:18:38.374746] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
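
The churn that follows is the expected fallout of mdns_discovery.sh@195/@196: the 4420 listeners were just removed, so every reconnect attempt to port 4420 dies with errno 111, which on Linux is ECONNREFUSED, and bdev_nvme keeps cycling through "Delete qpairs for reset", "Start disconnecting", "Start reconnecting", "connect() failed", "Resetting controller failed" until the discovery poller prunes the dead path. Once that settles, a path listing in the style of get_subsystem_paths should report only the surviving 4421 listener; a hedged expectation, not something this log has asserted yet:

  # same projection as mdns_discovery.sh@73 above; scripts/rpc.py is an assumed path
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # expected once the removal propagates: 4421
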
00:24:49.994 [2024-12-06 13:18:38.374854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.994 [2024-12-06 13:18:38.374877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.994 [2024-12-06 13:18:38.374891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.994 [2024-12-06 13:18:38.374908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.994 [2024-12-06 13:18:38.374937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.994 [2024-12-06 13:18:38.374949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.994 [2024-12-06 13:18:38.374961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.994 [2024-12-06 13:18:38.374972] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:49.994 [2024-12-06 13:18:38.374979] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.994 [2024-12-06 13:18:38.374985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.994 [2024-12-06 13:18:38.382685] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.994 [2024-12-06 13:18:38.382710] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.994 [2024-12-06 13:18:38.382717] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.994 [2024-12-06 13:18:38.382723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.994 [2024-12-06 13:18:38.382748] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:24:49.994 [2024-12-06 13:18:38.382822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.994 [2024-12-06 13:18:38.382843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.994 [2024-12-06 13:18:38.382855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.994 [2024-12-06 13:18:38.382871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.994 [2024-12-06 13:18:38.382885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.994 [2024-12-06 13:18:38.382894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.994 [2024-12-06 13:18:38.382905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.994 [2024-12-06 13:18:38.382914] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:24:49.994 [2024-12-06 13:18:38.382921] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.994 [2024-12-06 13:18:38.382927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.994 [2024-12-06 13:18:38.384756] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.994 [2024-12-06 13:18:38.384780] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.994 [2024-12-06 13:18:38.384786] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.994 [2024-12-06 13:18:38.384792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.994 [2024-12-06 13:18:38.384815] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:49.994 [2024-12-06 13:18:38.384864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.994 [2024-12-06 13:18:38.384883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.994 [2024-12-06 13:18:38.384894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.994 [2024-12-06 13:18:38.384909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.994 [2024-12-06 13:18:38.384923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.994 [2024-12-06 13:18:38.384933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.994 [2024-12-06 13:18:38.384943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.994 [2024-12-06 13:18:38.384952] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:49.994 [2024-12-06 13:18:38.384958] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.994 [2024-12-06 13:18:38.384963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.995 [2024-12-06 13:18:38.392761] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.995 [2024-12-06 13:18:38.392792] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.995 [2024-12-06 13:18:38.392798] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.995 [2024-12-06 13:18:38.392804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.995 [2024-12-06 13:18:38.392833] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:24:49.995 [2024-12-06 13:18:38.392895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.995 [2024-12-06 13:18:38.392916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.995 [2024-12-06 13:18:38.392928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.995 [2024-12-06 13:18:38.392944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.995 [2024-12-06 13:18:38.392959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.995 [2024-12-06 13:18:38.392968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.995 [2024-12-06 13:18:38.392978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.995 [2024-12-06 13:18:38.392987] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:24:49.995 [2024-12-06 13:18:38.392993] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.995 [2024-12-06 13:18:38.392999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.995 [2024-12-06 13:18:38.394825] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.995 [2024-12-06 13:18:38.394848] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.995 [2024-12-06 13:18:38.394854] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.995 [2024-12-06 13:18:38.394860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.995 [2024-12-06 13:18:38.394881] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:49.995 [2024-12-06 13:18:38.394930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.995 [2024-12-06 13:18:38.394949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.995 [2024-12-06 13:18:38.394960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.995 [2024-12-06 13:18:38.394975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.995 [2024-12-06 13:18:38.395000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.995 [2024-12-06 13:18:38.395010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.995 [2024-12-06 13:18:38.395020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.995 [2024-12-06 13:18:38.395029] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:49.995 [2024-12-06 13:18:38.395035] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.995 [2024-12-06 13:18:38.395041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.995 [2024-12-06 13:18:38.402845] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.995 [2024-12-06 13:18:38.402874] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.995 [2024-12-06 13:18:38.402881] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.995 [2024-12-06 13:18:38.402887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.995 [2024-12-06 13:18:38.402912] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:24:49.995 [2024-12-06 13:18:38.402974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.995 [2024-12-06 13:18:38.402995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.995 [2024-12-06 13:18:38.403006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.995 [2024-12-06 13:18:38.403023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.995 [2024-12-06 13:18:38.403048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.995 [2024-12-06 13:18:38.403059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.995 [2024-12-06 13:18:38.403069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.995 [2024-12-06 13:18:38.403078] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:24:49.995 [2024-12-06 13:18:38.403084] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.995 [2024-12-06 13:18:38.403090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.995 [2024-12-06 13:18:38.404891] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.995 [2024-12-06 13:18:38.404916] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.995 [2024-12-06 13:18:38.404923] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.995 [2024-12-06 13:18:38.404929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.995 [2024-12-06 13:18:38.404950] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:49.995 [2024-12-06 13:18:38.405009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.995 [2024-12-06 13:18:38.405029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.995 [2024-12-06 13:18:38.405040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.995 [2024-12-06 13:18:38.405056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.995 [2024-12-06 13:18:38.405071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.995 [2024-12-06 13:18:38.405081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.995 [2024-12-06 13:18:38.405090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.995 [2024-12-06 13:18:38.405099] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:49.995 [2024-12-06 13:18:38.405105] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.995 [2024-12-06 13:18:38.405111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.995 [2024-12-06 13:18:38.412924] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.995 [2024-12-06 13:18:38.412957] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.995 [2024-12-06 13:18:38.412964] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.995 [2024-12-06 13:18:38.412970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.995 [2024-12-06 13:18:38.412997] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:24:49.995 [2024-12-06 13:18:38.413058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.995 [2024-12-06 13:18:38.413078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.995 [2024-12-06 13:18:38.413090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.995 [2024-12-06 13:18:38.413106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.995 [2024-12-06 13:18:38.413121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.996 [2024-12-06 13:18:38.413130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.996 [2024-12-06 13:18:38.413140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.996 [2024-12-06 13:18:38.413151] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:24:49.996 [2024-12-06 13:18:38.413157] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.996 [2024-12-06 13:18:38.413163] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.996 [2024-12-06 13:18:38.414961] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.996 [2024-12-06 13:18:38.414985] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.996 [2024-12-06 13:18:38.414991] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.996 [2024-12-06 13:18:38.414997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.996 [2024-12-06 13:18:38.415019] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:49.996 [2024-12-06 13:18:38.415070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.996 [2024-12-06 13:18:38.415090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.996 [2024-12-06 13:18:38.415101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.996 [2024-12-06 13:18:38.415128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.996 [2024-12-06 13:18:38.415145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.996 [2024-12-06 13:18:38.415154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.996 [2024-12-06 13:18:38.415164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.996 [2024-12-06 13:18:38.415173] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:49.996 [2024-12-06 13:18:38.415179] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.996 [2024-12-06 13:18:38.415185] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.996 [2024-12-06 13:18:38.423011] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.996 [2024-12-06 13:18:38.423041] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.996 [2024-12-06 13:18:38.423048] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.996 [2024-12-06 13:18:38.423053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.996 [2024-12-06 13:18:38.423079] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:24:49.996 [2024-12-06 13:18:38.423151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.996 [2024-12-06 13:18:38.423172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.996 [2024-12-06 13:18:38.423183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.996 [2024-12-06 13:18:38.423200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.996 [2024-12-06 13:18:38.423215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.996 [2024-12-06 13:18:38.423224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.996 [2024-12-06 13:18:38.423234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.996 [2024-12-06 13:18:38.423243] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:24:49.996 [2024-12-06 13:18:38.423249] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.996 [2024-12-06 13:18:38.423255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.996 [2024-12-06 13:18:38.425029] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.996 [2024-12-06 13:18:38.425050] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.996 [2024-12-06 13:18:38.425061] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.996 [2024-12-06 13:18:38.425066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.996 [2024-12-06 13:18:38.425086] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:49.996 [2024-12-06 13:18:38.425139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.996 [2024-12-06 13:18:38.425158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.996 [2024-12-06 13:18:38.425168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.996 [2024-12-06 13:18:38.425184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.996 [2024-12-06 13:18:38.425198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.996 [2024-12-06 13:18:38.425207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.996 [2024-12-06 13:18:38.425217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.996 [2024-12-06 13:18:38.425225] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:49.996 [2024-12-06 13:18:38.425231] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.996 [2024-12-06 13:18:38.425237] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.996 [2024-12-06 13:18:38.433090] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.996 [2024-12-06 13:18:38.433115] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.996 [2024-12-06 13:18:38.433121] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.996 [2024-12-06 13:18:38.433127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.996 [2024-12-06 13:18:38.433148] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:24:49.996 [2024-12-06 13:18:38.433201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.996 [2024-12-06 13:18:38.433220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.996 [2024-12-06 13:18:38.433231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.996 [2024-12-06 13:18:38.433246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.996 [2024-12-06 13:18:38.433260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.996 [2024-12-06 13:18:38.433269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.996 [2024-12-06 13:18:38.433279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.996 [2024-12-06 13:18:38.433288] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:24:49.996 [2024-12-06 13:18:38.433294] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.996 [2024-12-06 13:18:38.433299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.996 [2024-12-06 13:18:38.435096] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.996 [2024-12-06 13:18:38.435121] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.996 [2024-12-06 13:18:38.435127] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.996 [2024-12-06 13:18:38.435133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.997 [2024-12-06 13:18:38.435153] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:49.997 [2024-12-06 13:18:38.435200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.997 [2024-12-06 13:18:38.435219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.997 [2024-12-06 13:18:38.435230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.997 [2024-12-06 13:18:38.435245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.997 [2024-12-06 13:18:38.435259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.997 [2024-12-06 13:18:38.435269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.997 [2024-12-06 13:18:38.435278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.997 [2024-12-06 13:18:38.435287] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:49.997 [2024-12-06 13:18:38.435293] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.997 [2024-12-06 13:18:38.435298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.997 [2024-12-06 13:18:38.443158] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.997 [2024-12-06 13:18:38.443182] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.997 [2024-12-06 13:18:38.443189] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.997 [2024-12-06 13:18:38.443194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.997 [2024-12-06 13:18:38.443215] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:24:49.997 [2024-12-06 13:18:38.443267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.997 [2024-12-06 13:18:38.443287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.997 [2024-12-06 13:18:38.443298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.997 [2024-12-06 13:18:38.443314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.997 [2024-12-06 13:18:38.443329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.997 [2024-12-06 13:18:38.443338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.997 [2024-12-06 13:18:38.443347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.997 [2024-12-06 13:18:38.443356] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:24:49.997 [2024-12-06 13:18:38.443362] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.997 [2024-12-06 13:18:38.443367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.997 [2024-12-06 13:18:38.445162] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.997 [2024-12-06 13:18:38.445184] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.997 [2024-12-06 13:18:38.445190] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.997 [2024-12-06 13:18:38.445195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.997 [2024-12-06 13:18:38.445216] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:49.997 [2024-12-06 13:18:38.445262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.997 [2024-12-06 13:18:38.445281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.997 [2024-12-06 13:18:38.445292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.997 [2024-12-06 13:18:38.445307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.997 [2024-12-06 13:18:38.445321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.997 [2024-12-06 13:18:38.445330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.997 [2024-12-06 13:18:38.445340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.997 [2024-12-06 13:18:38.445348] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:49.997 [2024-12-06 13:18:38.445354] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.997 [2024-12-06 13:18:38.445359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.997 [2024-12-06 13:18:38.453225] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.997 [2024-12-06 13:18:38.453252] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.997 [2024-12-06 13:18:38.453259] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.997 [2024-12-06 13:18:38.453265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.997 [2024-12-06 13:18:38.453287] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:24:49.997 [2024-12-06 13:18:38.453340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.997 [2024-12-06 13:18:38.453359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.997 [2024-12-06 13:18:38.453371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.997 [2024-12-06 13:18:38.453386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.997 [2024-12-06 13:18:38.453401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.997 [2024-12-06 13:18:38.453410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.997 [2024-12-06 13:18:38.453420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.997 [2024-12-06 13:18:38.453429] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:24:49.997 [2024-12-06 13:18:38.453435] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.997 [2024-12-06 13:18:38.453440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.997 [2024-12-06 13:18:38.455226] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.997 [2024-12-06 13:18:38.455251] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.997 [2024-12-06 13:18:38.455257] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.997 [2024-12-06 13:18:38.455263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.997 [2024-12-06 13:18:38.455283] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:49.997 [2024-12-06 13:18:38.455331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.997 [2024-12-06 13:18:38.455350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.997 [2024-12-06 13:18:38.455361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.997 [2024-12-06 13:18:38.455377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.997 [2024-12-06 13:18:38.455391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.997 [2024-12-06 13:18:38.455400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.997 [2024-12-06 13:18:38.455409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.997 [2024-12-06 13:18:38.455417] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:49.997 [2024-12-06 13:18:38.455423] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.997 [2024-12-06 13:18:38.455429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.998 [2024-12-06 13:18:38.463297] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.998 [2024-12-06 13:18:38.463321] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.998 [2024-12-06 13:18:38.463327] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.998 [2024-12-06 13:18:38.463333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.998 [2024-12-06 13:18:38.463353] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:24:49.998 [2024-12-06 13:18:38.463402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.998 [2024-12-06 13:18:38.463420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.998 [2024-12-06 13:18:38.463431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.998 [2024-12-06 13:18:38.463446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.998 [2024-12-06 13:18:38.463473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.998 [2024-12-06 13:18:38.463483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.998 [2024-12-06 13:18:38.463492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.998 [2024-12-06 13:18:38.463502] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:24:49.998 [2024-12-06 13:18:38.463508] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.998 [2024-12-06 13:18:38.463513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.998 [2024-12-06 13:18:38.465294] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.998 [2024-12-06 13:18:38.465316] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.998 [2024-12-06 13:18:38.465322] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.998 [2024-12-06 13:18:38.465328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.998 [2024-12-06 13:18:38.465347] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:49.998 [2024-12-06 13:18:38.465394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.998 [2024-12-06 13:18:38.465412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.998 [2024-12-06 13:18:38.465423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.998 [2024-12-06 13:18:38.465438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.998 [2024-12-06 13:18:38.465466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.998 [2024-12-06 13:18:38.465478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.998 [2024-12-06 13:18:38.465487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.998 [2024-12-06 13:18:38.465495] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:49.998 [2024-12-06 13:18:38.465501] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.998 [2024-12-06 13:18:38.465507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.998 [2024-12-06 13:18:38.473364] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.998 [2024-12-06 13:18:38.473389] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.998 [2024-12-06 13:18:38.473396] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.998 [2024-12-06 13:18:38.473401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.998 [2024-12-06 13:18:38.473423] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:24:49.998 [2024-12-06 13:18:38.473493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.998 [2024-12-06 13:18:38.473514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.998 [2024-12-06 13:18:38.473525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.998 [2024-12-06 13:18:38.473542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.998 [2024-12-06 13:18:38.473556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.998 [2024-12-06 13:18:38.473565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.998 [2024-12-06 13:18:38.473575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.998 [2024-12-06 13:18:38.473583] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:24:49.998 [2024-12-06 13:18:38.473589] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.998 [2024-12-06 13:18:38.473595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.998 [2024-12-06 13:18:38.475358] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.998 [2024-12-06 13:18:38.475381] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.998 [2024-12-06 13:18:38.475388] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.998 [2024-12-06 13:18:38.475394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.998 [2024-12-06 13:18:38.475414] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:49.998 [2024-12-06 13:18:38.475472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.998 [2024-12-06 13:18:38.475492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.998 [2024-12-06 13:18:38.475504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.998 [2024-12-06 13:18:38.475520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.998 [2024-12-06 13:18:38.475534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.998 [2024-12-06 13:18:38.475544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.998 [2024-12-06 13:18:38.475553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.998 [2024-12-06 13:18:38.475562] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:49.998 [2024-12-06 13:18:38.475568] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.998 [2024-12-06 13:18:38.475574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.998 [2024-12-06 13:18:38.483433] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:24:49.998 [2024-12-06 13:18:38.483467] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:24:49.998 [2024-12-06 13:18:38.483475] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:24:49.998 [2024-12-06 13:18:38.483481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:24:49.998 [2024-12-06 13:18:38.483502] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:24:49.998 [2024-12-06 13:18:38.483558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.999 [2024-12-06 13:18:38.483578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1188060 with addr=10.0.0.4, port=4420 00:24:49.999 [2024-12-06 13:18:38.483588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188060 is same with the state(6) to be set 00:24:49.999 [2024-12-06 13:18:38.483604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1188060 (9): Bad file descriptor 00:24:49.999 [2024-12-06 13:18:38.483619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:24:49.999 [2024-12-06 13:18:38.483628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:24:49.999 [2024-12-06 13:18:38.483637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:24:49.999 [2024-12-06 13:18:38.483646] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:24:49.999 [2024-12-06 13:18:38.483652] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:24:49.999 [2024-12-06 13:18:38.483657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:24:49.999 [2024-12-06 13:18:38.485423] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.999 [2024-12-06 13:18:38.485445] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.999 [2024-12-06 13:18:38.485464] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.999 [2024-12-06 13:18:38.485470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.999 [2024-12-06 13:18:38.485490] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:49.999 [2024-12-06 13:18:38.485536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.999 [2024-12-06 13:18:38.485555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117a400 with addr=10.0.0.3, port=4420 00:24:49.999 [2024-12-06 13:18:38.485570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a400 is same with the state(6) to be set 00:24:49.999 [2024-12-06 13:18:38.485585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a400 (9): Bad file descriptor 00:24:49.999 [2024-12-06 13:18:38.485616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.999 [2024-12-06 13:18:38.485626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.999 [2024-12-06 13:18:38.485635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:49.999 [2024-12-06 13:18:38.485644] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:49.999 [2024-12-06 13:18:38.485650] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.999 [2024-12-06 13:18:38.485655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.999 [2024-12-06 13:18:38.490739] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:24:49.999 [2024-12-06 13:18:38.490771] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:49.999 [2024-12-06 13:18:38.490804] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:49.999 [2024-12-06 13:18:38.491765] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:24:49.999 [2024-12-06 13:18:38.491795] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:24:49.999 [2024-12-06 13:18:38.491816] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:24:49.999 [2024-12-06 13:18:38.576810] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:49.999 [2024-12-06 13:18:38.577831] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 
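At this point the discovery log page has reconciled the paths — :4420 dropped, :4421 confirmed — and the assertions in the surrounding trace re-read everything over the /tmp/host.sock RPC socket: the controller names, the bdev list, and each controller's trsvcid, which must now be 4421. A minimal polling wrapper over the same rpc_cmd/jq pipeline that get_subsystem_paths uses could look like the sketch below; wait_for_path and the retry cap are illustrative, not part of the original harness.

    # Sketch only: poll until a controller's paths sit on the expected port,
    # reusing the exact pipeline from get_subsystem_paths. Helper name invented.
    wait_for_path() {
        local ctrl=$1 expect=$2 tries=30 ports
        while (( tries-- > 0 )); do
            ports=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrl" |
                jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
            [[ $ports == "$expect" ]] && return 0
            sleep 1
        done
        return 1
    }
    # e.g.: wait_for_path mdns0_nvme0 4421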
00:24:51.008 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.009 13:18:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:24:51.267 [2024-12-06 13:18:39.652697] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.203 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:52.462 13:18:40 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.462 [2024-12-06 13:18:40.891790] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:24:52.462 2024/12/06 13:18:40 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:24:52.462 request: 00:24:52.462 { 00:24:52.462 "method": "bdev_nvme_start_mdns_discovery", 00:24:52.462 "params": { 00:24:52.462 "name": "mdns", 00:24:52.462 "svcname": "_nvme-disc._http", 00:24:52.462 "hostnqn": "nqn.2021-12.io.spdk:test" 00:24:52.462 } 00:24:52.462 } 00:24:52.462 Got JSON-RPC error response 00:24:52.462 GoRPCClient: error on JSON-RPC call 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:52.462 13:18:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:24:53.029 [2024-12-06 13:18:41.480456] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:24:53.029 [2024-12-06 13:18:41.580444] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:24:53.287 [2024-12-06 13:18:41.680449] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:24:53.287 [2024-12-06 13:18:41.680476] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:24:53.287 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:53.287 cookie is 0 00:24:53.287 is_local: 1 00:24:53.287 our_own: 0 00:24:53.287 wide_area: 0 00:24:53.287 multicast: 1 00:24:53.287 cached: 1 00:24:53.287 [2024-12-06 13:18:41.780448] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:24:53.287 [2024-12-06 13:18:41.780477] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:24:53.287 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:53.287 cookie is 0 00:24:53.287 is_local: 1 00:24:53.287 our_own: 0 00:24:53.287 wide_area: 0 00:24:53.287 multicast: 1 00:24:53.287 cached: 1 00:24:53.287 [2024-12-06 13:18:41.780489] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:24:53.287 [2024-12-06 13:18:41.880447] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:24:53.287 [2024-12-06 13:18:41.880476] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:24:53.287 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:53.287 cookie is 0 00:24:53.287 is_local: 1 00:24:53.287 our_own: 0 00:24:53.287 wide_area: 0 00:24:53.287 multicast: 1 00:24:53.287 cached: 1 00:24:53.545 [2024-12-06 13:18:41.980455] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:24:53.545 [2024-12-06 13:18:41.980492] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:24:53.545 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:53.545 cookie is 0 00:24:53.545 is_local: 1 00:24:53.545 our_own: 0 00:24:53.545 wide_area: 0 00:24:53.545 multicast: 1 00:24:53.545 cached: 1 00:24:53.545 [2024-12-06 13:18:41.980504] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:24:54.112 [2024-12-06 13:18:42.692780] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:24:54.112 [2024-12-06 13:18:42.692816] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:24:54.112 [2024-12-06 13:18:42.692837] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:24:54.370 [2024-12-06 13:18:42.779030] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:24:54.370 [2024-12-06 13:18:42.837584] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:24:54.370 [2024-12-06 13:18:42.838492] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x12d4e90:1 started. 00:24:54.370 [2024-12-06 13:18:42.840285] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:24:54.370 [2024-12-06 13:18:42.840311] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:24:54.370 [2024-12-06 13:18:42.841715] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x12d4e90 was disconnected and freed. delete nvme_qpair. 
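The re-attach above is the payoff of the restart sequence traced earlier: bdev_nvme_stop_mdns_discovery -b mdns stops the avahi poller and drops every mdns-backed controller and bdev (hence the empty-string checks that follow it), and a fresh bdev_nvme_start_mdns_discovery brings the discovery controllers back on :4421. The NOT-wrapped calls around it probe the duplicate-start guards: reusing the name mdns with a different service type fails with JSON-RPC Code=-17 ("File exists"), and further below the same -17 comes back when cdc asks to browse _nvme-disc._tcp, which is already being browsed. Condensed, the sequence is (socket path, names and NQN exactly as they appear in the trace):

    # stop, restart, then provoke the duplicate-name rejection
    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test   # succeeds
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test  # Code=-17, File exists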
00:24:54.370 [2024-12-06 13:18:42.892728] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:54.370 [2024-12-06 13:18:42.892765] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:54.370 [2024-12-06 13:18:42.892785] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:54.370 [2024-12-06 13:18:42.978917] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:24:54.627 [2024-12-06 13:18:43.037347] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:24:54.627 [2024-12-06 13:18:43.038058] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x11771f0:1 started. 00:24:54.627 [2024-12-06 13:18:43.039955] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:24:54.627 [2024-12-06 13:18:43.039984] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:54.627 [2024-12-06 13:18:43.041608] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x11771f0 was disconnected and freed. delete nvme_qpair. 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.949 13:18:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:57.949 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.950 [2024-12-06 13:18:46.073143] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:24:57.950 2024/12/06 13:18:46 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:24:57.950 request: 00:24:57.950 { 00:24:57.950 "method": "bdev_nvme_start_mdns_discovery", 00:24:57.950 "params": { 00:24:57.950 "name": "cdc", 00:24:57.950 "svcname": "_nvme-disc._tcp", 00:24:57.950 "hostnqn": "nqn.2021-12.io.spdk:test" 00:24:57.950 } 00:24:57.950 } 00:24:57.950 Got JSON-RPC error response 00:24:57.950 GoRPCClient: error on JSON-RPC call 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 
]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local 
ip=10.0.0.3 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:24:57.950 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:24:57.950 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:24:57.950 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:24:57.950 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:57.950 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:57.950 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:57.950 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.950 13:18:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:24:57.950 [2024-12-06 13:18:46.280436] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:24:58.883 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:24:58.883 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:58.883 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 95598 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 95598 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 95614 00:24:58.883 Got SIGTERM, quitting. 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:58.883 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:24:58.883 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:24:58.883 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:24:58.883 avahi-daemon 0.8 exiting. 
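Most of the two verification blocks above is the inline expansion of check_mdns_request_exists, called once with check_type=found while the 10.0.0.3 listener exists and once with 'not found' after nvmf_subsystem_remove_listener (the duplicate-start check before it likewise only passes because a second bdev_nvme_start_mdns_discovery returns Code=-17 / File exists). A condensed sketch of the helper reconstructed from this xtrace; the real script in mdns_discovery.sh tests process, ip and port with separate [[ ]] patterns, exactly as traced above:

check_mdns_request_exists() {
    local process=$1 ip=$2 port=$3 check_type=$4
    local output line

    # -t: exit after the cache dump, -r: resolve records, -p: parsable ';' fields
    output=$(avahi-browse -t -r _nvme-disc._tcp -p)

    local -a lines
    readarray -t lines <<< "$output"

    for line in "${lines[@]}"; do
        if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
            [[ $check_type == found ]] && return 0
            return 1    # a record matched although "not found" was expected
        fi
    done

    [[ $check_type == found ]] && return 1
    return 0            # no match: success when checking for "not found"
}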
00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:59.142 rmmod nvme_tcp 00:24:59.142 rmmod nvme_fabrics 00:24:59.142 rmmod nvme_keyring 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 95557 ']' 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 95557 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 95557 ']' 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 95557 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95557 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:59.142 killing process with pid 95557 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95557' 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 95557 00:24:59.142 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 95557 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:59.400 13:18:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:59.400 13:18:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:59.400 13:18:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:59.400 13:18:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.400 13:18:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.400 13:18:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:24:59.657 00:24:59.657 real 0m21.828s 00:24:59.657 user 0m42.625s 00:24:59.657 sys 0m2.142s 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.657 ************************************ 00:24:59.657 END TEST nvmf_mdns_discovery 00:24:59.657 ************************************ 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.657 ************************************ 00:24:59.657 START TEST nvmf_host_multipath 00:24:59.657 ************************************ 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:59.657 * Looking for test storage... 
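Between the two tests the log shows nvmftestfini unwinding the fixture: unload the nvme-tcp module stack, reap the target with killprocess, then purge every SPDK-tagged iptables rule before deleting the veth topology. The two helpers doing that work, condensed from the xtrace above (killprocess's sudo branch is omitted):

killprocess() {
    local pid=$1
    kill -0 "$pid"                    # fail fast if the process is already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                       # reap it and propagate the exit status
}

# iptr drops exactly the rules that were installed with an SPDK_NVMF comment
# and reloads everything else untouched.
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"                # 95557 in this run
iptr

The multipath test whose banner starts above then rebuilds the same veth topology from scratch (sketched further below).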
00:24:59.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:24:59.657 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:59.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.917 --rc genhtml_branch_coverage=1 00:24:59.917 --rc genhtml_function_coverage=1 00:24:59.917 --rc genhtml_legend=1 00:24:59.917 --rc geninfo_all_blocks=1 00:24:59.917 --rc geninfo_unexecuted_blocks=1 00:24:59.917 00:24:59.917 ' 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:59.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.917 --rc genhtml_branch_coverage=1 00:24:59.917 --rc genhtml_function_coverage=1 00:24:59.917 --rc genhtml_legend=1 00:24:59.917 --rc geninfo_all_blocks=1 00:24:59.917 --rc geninfo_unexecuted_blocks=1 00:24:59.917 00:24:59.917 ' 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:59.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.917 --rc genhtml_branch_coverage=1 00:24:59.917 --rc genhtml_function_coverage=1 00:24:59.917 --rc genhtml_legend=1 00:24:59.917 --rc geninfo_all_blocks=1 00:24:59.917 --rc geninfo_unexecuted_blocks=1 00:24:59.917 00:24:59.917 ' 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:59.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.917 --rc genhtml_branch_coverage=1 00:24:59.917 --rc genhtml_function_coverage=1 00:24:59.917 --rc genhtml_legend=1 00:24:59.917 --rc geninfo_all_blocks=1 00:24:59.917 --rc geninfo_unexecuted_blocks=1 00:24:59.917 00:24:59.917 ' 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.917 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.918 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:59.918 Cannot find device "nvmf_init_br" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:59.918 Cannot find device "nvmf_init_br2" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:59.918 Cannot find device "nvmf_tgt_br" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:59.918 Cannot find device "nvmf_tgt_br2" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:59.918 Cannot find device "nvmf_init_br" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:59.918 Cannot find device "nvmf_init_br2" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:59.918 Cannot find device "nvmf_tgt_br" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:59.918 Cannot find device "nvmf_tgt_br2" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:59.918 Cannot find device "nvmf_br" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:59.918 Cannot find device "nvmf_init_if" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:59.918 Cannot find device "nvmf_init_if2" 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:24:59.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:59.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:59.918 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:00.176 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
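The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first attempts every teardown step and tolerates failure, then builds the topology fresh. Stripped of xtrace, the build performed in this log is (link-up steps elided):

# Target interfaces live in their own namespace; each veth peer named *_br
# stays on the host side and is enslaved to a common bridge.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

The ipts calls that follow stamp each firewall rule with the SPDK_NVMF comment that iptr (sketched earlier) filters on, and the four pings confirm both directions across the bridge before any NVMe traffic is attempted.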
00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:00.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:00.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:25:00.177 00:25:00.177 --- 10.0.0.3 ping statistics --- 00:25:00.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.177 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:00.177 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:00.177 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.103 ms 00:25:00.177 00:25:00.177 --- 10.0.0.4 ping statistics --- 00:25:00.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.177 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:00.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:25:00.177 00:25:00.177 --- 10.0.0.1 ping statistics --- 00:25:00.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.177 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:00.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:00.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:25:00.177 00:25:00.177 --- 10.0.0.2 ping statistics --- 00:25:00.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.177 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=96261 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 96261 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96261 ']' 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.177 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:00.177 [2024-12-06 13:18:48.780544] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
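nvmfappstart then launches the target inside the namespace and blocks in waitforlisten until the RPC socket answers. A simplified sketch of that startup gate, with the caveat that the real helper in autotest_common.sh probes the RPC server itself rather than merely testing that the socket file exists:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
        [[ -S $rpc_addr ]] && return 0            # simplification, see note above
        sleep 0.5
    done
    return 1
}

waitforlisten "$nvmfpid"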
00:25:00.177 [2024-12-06 13:18:48.780634] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.435 [2024-12-06 13:18:48.934880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:00.435 [2024-12-06 13:18:49.002009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.435 [2024-12-06 13:18:49.002090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.435 [2024-12-06 13:18:49.002113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.435 [2024-12-06 13:18:49.002130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.435 [2024-12-06 13:18:49.002145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.435 [2024-12-06 13:18:49.003595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.435 [2024-12-06 13:18:49.003616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.692 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.692 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:00.692 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.692 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.692 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:00.692 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.692 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96261 00:25:00.692 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:01.004 [2024-12-06 13:18:49.481976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.004 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:01.276 Malloc0 00:25:01.276 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:01.535 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:02.098 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:02.098 [2024-12-06 13:18:50.696272] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:02.098 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:25:02.355 [2024-12-06 13:18:50.956370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:02.355 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96350 00:25:02.355 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:02.355 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:02.355 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96350 /var/tmp/bdevperf.sock 00:25:02.355 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96350 ']' 00:25:02.355 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.355 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.355 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.355 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.355 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:03.728 13:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.728 13:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:03.728 13:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:03.728 13:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:04.293 Nvme0n1 00:25:04.293 13:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:04.551 Nvme0n1 00:25:04.551 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:25:04.551 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:05.484 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:05.484 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:06.050 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
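With the target listening, the whole multipath fixture in multipath.sh@35-59 reduces to the RPC calls below: one subsystem with ANA reporting (-r), one namespace, two listeners, and the same bdevperf controller attached once per port so the second attach becomes an alternate path (-x multipath) rather than a new bdev. All names and flags are copied from the calls above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

# bdevperf side: same controller name over both ports
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
for port in 4420 4421; do
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -x multipath -l -1 -o 10
done

# set_ANA_state <state for 4420> <state for 4421>, as in multipath.sh@58-59
set_ANA_state() {
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}
set_ANA_state non_optimized optimized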
00:25:06.309 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:06.309 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96445 00:25:06.309 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96261 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:06.309 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:12.876 13:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:12.876 13:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:12.876 Attaching 4 probes... 00:25:12.876 @path[10.0.0.3, 4421]: 17142 00:25:12.876 @path[10.0.0.3, 4421]: 17511 00:25:12.876 @path[10.0.0.3, 4421]: 17461 00:25:12.876 @path[10.0.0.3, 4421]: 17645 00:25:12.876 @path[10.0.0.3, 4421]: 17617 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96445 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:12.876 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:13.147 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:13.147 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96261 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:13.147 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96584 00:25:13.147 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:19.747 13:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:19.747 13:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:19.747 13:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:19.747 13:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:19.747 Attaching 4 probes... 00:25:19.747 @path[10.0.0.3, 4420]: 17533 00:25:19.747 @path[10.0.0.3, 4420]: 17773 00:25:19.747 @path[10.0.0.3, 4420]: 17726 00:25:19.747 @path[10.0.0.3, 4420]: 17740 00:25:19.747 @path[10.0.0.3, 4420]: 17907 00:25:19.747 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:19.747 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:19.747 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:19.748 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:19.748 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:19.748 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:19.748 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96584 00:25:19.748 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:19.748 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:19.748 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:19.748 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:20.006 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:20.006 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96709 00:25:20.006 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96261 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:20.006 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:26.596 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:26.596 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:26.596 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:26.596 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:26.596 Attaching 4 probes... 
00:25:26.596 @path[10.0.0.3, 4421]: 14316 00:25:26.596 @path[10.0.0.3, 4421]: 17879 00:25:26.596 @path[10.0.0.3, 4421]: 17714 00:25:26.596 @path[10.0.0.3, 4421]: 17835 00:25:26.596 @path[10.0.0.3, 4421]: 17890 00:25:26.597 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:26.597 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:26.597 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:26.597 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:26.597 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:26.597 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:26.597 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96709 00:25:26.597 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:26.597 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:26.597 13:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:26.597 13:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:26.853 13:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:26.853 13:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96261 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:26.853 13:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96845 00:25:26.853 13:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:33.468 Attaching 4 probes... 
00:25:33.468 00:25:33.468 00:25:33.468 00:25:33.468 00:25:33.468 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96845 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:33.468 13:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:33.468 13:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:33.726 13:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:33.726 13:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96980 00:25:33.726 13:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96261 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:33.726 13:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:40.313 Attaching 4 probes... 
00:25:40.313 @path[10.0.0.3, 4421]: 16805 00:25:40.313 @path[10.0.0.3, 4421]: 17425 00:25:40.313 @path[10.0.0.3, 4421]: 17217 00:25:40.313 @path[10.0.0.3, 4421]: 17329 00:25:40.313 @path[10.0.0.3, 4421]: 17181
00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96980
00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:40.313 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:25:40.313 [2024-12-06 13:19:28.919659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d8b10 is same with the state(6) to be set
[... same tcp.c:1790 *ERROR* line repeated verbatim for tqpair=0x15d8b10, timestamps 13:19:28.919715 through 13:19:28.920742, while the 4421 listener is removed ...]
00:25:40.314 [2024-12-06 13:19:28.920750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d8b10 is same with the state(6) to be set
00:25:40.314 13:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1
00:25:41.686 13:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:25:41.686 13:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97110
00:25:41.686 13:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:25:41.686 13:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96261 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:25:48.247 13:19:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:25:48.247 13:19:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:48.248 Attaching 4 probes...
00:25:48.248 @path[10.0.0.3, 4420]: 17116 00:25:48.248 @path[10.0.0.3, 4420]: 17347 00:25:48.248 @path[10.0.0.3, 4420]: 17189 00:25:48.248 @path[10.0.0.3, 4420]: 17303 00:25:48.248 @path[10.0.0.3, 4420]: 16630 00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97110 00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:48.248 [2024-12-06 13:19:36.558472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:48.248 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:48.505 13:19:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:25:55.059 13:19:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:25:55.059 13:19:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97304 00:25:55.059 13:19:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96261 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:55.059 13:19:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:00.335 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:00.335 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:00.593 Attaching 4 probes... 
00:26:00.593 @path[10.0.0.3, 4421]: 17024 00:26:00.593 @path[10.0.0.3, 4421]: 16790 00:26:00.593 @path[10.0.0.3, 4421]: 16903 00:26:00.593 @path[10.0.0.3, 4421]: 16747 00:26:00.593 @path[10.0.0.3, 4421]: 17044 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97304 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:00.593 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96350 00:26:00.594 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96350 ']' 00:26:00.594 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96350 00:26:00.594 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:26:00.594 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.594 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96350 00:26:00.594 killing process with pid 96350 00:26:00.594 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:00.594 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:00.594 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96350' 00:26:00.594 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96350 00:26:00.594 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96350 00:26:00.594 { 00:26:00.594 "results": [ 00:26:00.594 { 00:26:00.594 "job": "Nvme0n1", 00:26:00.594 "core_mask": "0x4", 00:26:00.594 "workload": "verify", 00:26:00.594 "status": "terminated", 00:26:00.594 "verify_range": { 00:26:00.594 "start": 0, 00:26:00.594 "length": 16384 00:26:00.594 }, 00:26:00.594 "queue_depth": 128, 00:26:00.594 "io_size": 4096, 00:26:00.594 "runtime": 56.013473, 00:26:00.594 "iops": 7445.7979065143845, 00:26:00.594 "mibps": 29.085148072321815, 00:26:00.594 "io_failed": 0, 00:26:00.594 "io_timeout": 0, 00:26:00.594 "avg_latency_us": 17158.89951252857, 00:26:00.594 "min_latency_us": 565.9927272727273, 00:26:00.594 "max_latency_us": 7046430.72 00:26:00.594 } 00:26:00.594 ], 00:26:00.594 "core_count": 1 00:26:00.594 } 00:26:00.863 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96350 00:26:00.863 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:00.863 [2024-12-06 13:18:51.038366] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 
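[Editorial consistency check on the bdevperf summary just above, not part of the harness. Assuming the JSON block, stripped of the log's timestamp prefixes, is saved to a hypothetical results.json:]

# iops * io_size should reproduce the reported "mibps" figure.
jq -r '.results[0] | "\(.iops) IOPS * \(.io_size) B = \(.iops * .io_size / 1048576) MiB/s"' results.json
# -> 7445.7979 IOPS * 4096 B = 29.085 MiB/s, matching "mibps". The 56 s average
# sits well below the ~17.5k IOPS of the steady-state probes, consistent with
# I/O merely queuing during the all-inaccessible window (max_latency_us ~ 7.0 s).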
24.03.0 initialization... 00:26:00.863 [2024-12-06 13:18:51.038521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96350 ] 00:26:00.863 [2024-12-06 13:18:51.189020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.863 [2024-12-06 13:18:51.261136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.863 Running I/O for 90 seconds... 00:26:00.863 9052.00 IOPS, 35.36 MiB/s [2024-12-06T13:19:49.503Z] 9039.50 IOPS, 35.31 MiB/s [2024-12-06T13:19:49.503Z] 8963.33 IOPS, 35.01 MiB/s [2024-12-06T13:19:49.503Z] 8923.50 IOPS, 34.86 MiB/s [2024-12-06T13:19:49.503Z] 8873.20 IOPS, 34.66 MiB/s [2024-12-06T13:19:49.503Z] 8877.83 IOPS, 34.68 MiB/s [2024-12-06T13:19:49.503Z] 8863.57 IOPS, 34.62 MiB/s [2024-12-06T13:19:49.503Z] 8852.75 IOPS, 34.58 MiB/s [2024-12-06T13:19:49.503Z] [2024-12-06 13:19:01.659301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.863 [2024-12-06 13:19:01.659367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:00.863 [2024-12-06 13:19:01.659427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.863 [2024-12-06 13:19:01.659463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:00.863 [2024-12-06 13:19:01.659491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.863 [2024-12-06 13:19:01.659510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:00.863 [2024-12-06 13:19:01.659533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.863 [2024-12-06 13:19:01.659549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:00.863 [2024-12-06 13:19:01.659571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.863 [2024-12-06 13:19:01.659588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:00.863 [2024-12-06 13:19:01.659610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.863 [2024-12-06 13:19:01.659626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:00.863 [2024-12-06 13:19:01.659648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.863 [2024-12-06 13:19:01.659664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:00.863 
[2024-12-06 13:19:01.659685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.863 [2024-12-06 13:19:01.659701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:00.863
[... the same WRITE / ASYMMETRIC ACCESS INACCESSIBLE (03/02) *NOTICE* pair repeats for each queued 8-block WRITE, lba:74608 through lba:75128 (sqhd:0078-003a), plus one queued READ (cid:71, lba:74536, sqhd:000d), timestamps 13:19:01.659723 through 13:19:01.663972 ...]
00:26:00.865 [2024-12-06 13:19:01.663994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.865 [2024-12-06 13:19:01.664011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:00.865 [2024-12-06 13:19:01.664033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.865 [2024-12-06 13:19:01.664049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:00.865 [2024-12-06 13:19:01.664072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.865 [2024-12-06 13:19:01.664088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:00.865 [2024-12-06 13:19:01.664111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.865 [2024-12-06 13:19:01.664127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:00.865 [2024-12-06 13:19:01.664149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.865 [2024-12-06 13:19:01.664166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:00.865 [2024-12-06 13:19:01.664188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.865 [2024-12-06 13:19:01.664205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:00.865 [2024-12-06 13:19:01.664228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.865 [2024-12-06 13:19:01.664245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.865 [2024-12-06 13:19:01.664268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.865 [2024-12-06 13:19:01.664285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:00.865 [2024-12-06 13:19:01.664307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.865 [2024-12-06 13:19:01.664330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:00.865 [2024-12-06 13:19:01.664354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.865 [2024-12-06 13:19:01.664370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.664968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.664984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:00.866 [2024-12-06 13:19:01.665217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:00.866 [2024-12-06 13:19:01.665656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.866 [2024-12-06 13:19:01.665671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.665694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.665709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.665731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.665747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.665769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.665786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.665835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.665854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.665876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.665892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.665915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.665931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.665954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.665969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.665992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.666009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.668060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.668092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.668121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.668138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:01.668162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:01.668180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:00.867 8847.44 IOPS, 34.56 MiB/s [2024-12-06T13:19:49.507Z] 8844.80 IOPS, 34.55 MiB/s [2024-12-06T13:19:49.507Z] 8848.00 IOPS, 34.56 MiB/s [2024-12-06T13:19:49.507Z] 8848.58 IOPS, 34.56 MiB/s [2024-12-06T13:19:49.507Z] 8851.08 IOPS, 34.57 MiB/s [2024-12-06T13:19:49.507Z] 8859.29 IOPS, 34.61 MiB/s [2024-12-06T13:19:49.507Z] 8861.07 IOPS, 34.61 MiB/s [2024-12-06T13:19:49.507Z] [2024-12-06 13:19:08.261877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:08.261946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:08.262008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:08.262039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:08.262064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:08.262081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:08.262104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:08.262146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:08.262173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:08.262190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:00.867 [2024-12-06 13:19:08.262212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.867 [2024-12-06 13:19:08.262228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
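[editor's note] The seven throughput samples above are consistent with the I/O size visible in the surrounding trace: each command is len:8 blocks of 512 B, i.e. 0x1000 = 4096 bytes, so MiB/s = IOPS * 4096 / 2^20. A minimal sketch, assuming only that 4 KiB I/O size, that reproduces the MiB/s column from the IOPS column:

    /* Sanity-check: derive MiB/s from IOPS at 4096 bytes per I/O. */
    #include <stdio.h>

    int main(void)
    {
        const double iops[] = { 8847.44, 8844.80, 8848.00, 8848.58,
                                8851.08, 8859.29, 8861.07 };
        for (unsigned i = 0; i < sizeof(iops) / sizeof(iops[0]); i++) {
            /* 4096 bytes per I/O; 1 MiB = 1048576 bytes. */
            printf("%.2f IOPS -> %.2f MiB/s\n",
                   iops[i], iops[i] * 4096.0 / 1048576.0);
        }
        return 0;
    }

Compiled and run, this prints 34.56 .. 34.61 MiB/s, matching the logged samples to two decimals.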
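[editor's note] In these completions, spdk_nvme_print_completion() renders the NVMe status field as an "(SCT/SC)" hex pair: "(03/02)" is Status Code Type 3h (Path Related Status) with Status Code 02h (Asymmetric Access Inaccessible), the ANA state this failover test drives the path into; dnr:0 means Do Not Retry is clear, so the host is allowed to retry the I/O on another path. A minimal decoder sketch for that tuple, based on the NVMe base specification's SCT 3h table rather than SPDK's own code:

    /* Illustrative decode of the "(SCT/SC)" pair printed above. */
    #include <stdio.h>
    #include <stdint.h>

    static const char *decode_path_status(uint8_t sct, uint8_t sc)
    {
        if (sct != 0x3) {          /* SCT 3h: Path Related Status */
            return "not a path-related status";
        }
        switch (sc) {
        case 0x0: return "INTERNAL PATH ERROR";
        case 0x1: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
        case 0x2: return "ASYMMETRIC ACCESS INACCESSIBLE";
        case 0x3: return "ASYMMETRIC ACCESS TRANSITION";
        default:  return "path-related (unknown status code)";
        }
    }

    int main(void)
    {
        uint8_t sct = 0x03, sc = 0x02;   /* the tuple from the log */
        printf("(%02x/%02x) = %s\n", sct, sc, decode_path_status(sct, sc));
        return 0;
    }

Note also that in the trace below the WRITEs are printed with "SGL DATA BLOCK OFFSET 0x0 len:0x1000" while the READs show "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0"; this is consistent with NVMe over Fabrics carrying host-to-controller write data in-capsule while read data uses a transport SGL data block.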
00:26:00.867 [2024-12-06 13:19:08.261877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.867 [2024-12-06 13:19:08.261946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0
[... 32 further WRITE command/completion pairs elided (13:19:08.262008 .. 13:19:08.263686): sqid:1 nsid:1, lba 19280..19528 in steps of 8, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 002f..004e ...]
00:26:00.868 [2024-12-06 13:19:08.263710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.868 [2024-12-06 13:19:08.263727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0
[... 55 further READ command/completion pairs elided (13:19:08.263754 .. 13:19:08.266102): sqid:1 nsid:1, lba 18832..19264 in steps of 8, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0050..0006 (wrapping past 007f) ...]
00:26:00.870 [2024-12-06 13:19:08.266125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.870 [2024-12-06 13:19:08.266150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
[... 24 further WRITE command/completion pairs elided (13:19:08.266175 .. 13:19:08.267536): sqid:1 nsid:1, lba 19544..19728 in steps of 8, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0008..001f ...]
[2024-12-06 13:19:08.267579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.267597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.267626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.267642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.267671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.267687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.267717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.267733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.267761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.267778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.267807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.267823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.267852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.267868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.267897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.267913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.267942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.267958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.267987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.268003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:26:00.871 [2024-12-06 13:19:08.268032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.268048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.268077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.268094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.268129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.268148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:08.268182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:08.268199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:00.871 8339.06 IOPS, 32.57 MiB/s [2024-12-06T13:19:49.511Z] 8339.47 IOPS, 32.58 MiB/s [2024-12-06T13:19:49.511Z] 8366.33 IOPS, 32.68 MiB/s [2024-12-06T13:19:49.511Z] 8396.95 IOPS, 32.80 MiB/s [2024-12-06T13:19:49.511Z] 8424.35 IOPS, 32.91 MiB/s [2024-12-06T13:19:49.511Z] 8446.71 IOPS, 32.99 MiB/s [2024-12-06T13:19:49.511Z] 8464.59 IOPS, 33.06 MiB/s [2024-12-06T13:19:49.511Z] [2024-12-06 13:19:15.376288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.871 [2024-12-06 13:19:15.376356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:15.376434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.871 [2024-12-06 13:19:15.376457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:15.376501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.871 [2024-12-06 13:19:15.376521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:15.376544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.871 [2024-12-06 13:19:15.376560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:00.871 [2024-12-06 13:19:15.376582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.871 [2024-12-06 13:19:15.376598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
00:26:00.871-00:26:00.875 [2024-12-06 13:19:15.376-13:19:15.382] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: condensed run, same pattern as above: READ sqid:1 lba:59560-60000 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE sqid:1 lba:60008-60576 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd incrementing per completion
00:26:00.875 8159.52 IOPS, 31.87 MiB/s [2024-12-06T13:19:49.515Z] 7819.54 IOPS, 30.55 MiB/s [2024-12-06T13:19:49.515Z] 7506.76 IOPS, 29.32 MiB/s [2024-12-06T13:19:49.515Z] 7218.04 IOPS, 28.20 MiB/s [2024-12-06T13:19:49.515Z] 6950.70 IOPS, 27.15 MiB/s [2024-12-06T13:19:49.515Z] 6702.46 IOPS, 26.18 MiB/s [2024-12-06T13:19:49.515Z] 6471.34 IOPS, 25.28 MiB/s [2024-12-06T13:19:49.515Z] 6488.47 IOPS, 25.35 MiB/s [2024-12-06T13:19:49.515Z] 6554.74 IOPS, 25.60 MiB/s [2024-12-06T13:19:49.515Z] 6624.16 IOPS, 25.88 MiB/s [2024-12-06T13:19:49.515Z] 6684.36 IOPS, 26.11 MiB/s [2024-12-06T13:19:49.515Z] 6742.26 IOPS, 26.34 MiB/s [2024-12-06T13:19:49.515Z] 6796.34 IOPS, 26.55 MiB/s [2024-12-06T13:19:49.516Z]
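The interleaved samples are the benchmark's periodic IOPS/throughput readout (the tool is not named in this excerpt; the format matches SPDK's bdevperf). The rate sags from about 8160 to 6471 IOPS across the failure window and then recovers. Since every command in this log is len:8, i.e. eight 512-byte sectors or 4 KiB per I/O, the MiB/s column is just IOPS * 4096 / 2^20 = IOPS / 256. A short C sketch reproducing the printed column from the sampled values (the 4 KiB I/O size is inferred from the len:8 fields, not stated by the tool):

/* Recompute the MiB/s column of the IOPS samples above, assuming each
 * I/O is len:8 sectors x 512 B = 4096 bytes, as the command notices in
 * this log indicate. Sample values are copied from the run above. */
#include <stdio.h>

int main(void)
{
	const double iops[] = { 8159.52, 7819.54, 7506.76, 7218.04, 6950.70,
				6702.46, 6471.34, 6488.47, 6554.74, 6624.16,
				6684.36, 6742.26, 6796.34 };
	const double io_bytes = 8 * 512.0;	/* len:8 x 512-byte sectors */

	for (size_t i = 0; i < sizeof(iops) / sizeof(iops[0]); i++) {
		printf("%7.2f IOPS, %5.2f MiB/s\n",
		       iops[i], iops[i] * io_bytes / (1024.0 * 1024.0));
	}
	return 0;
}

Running this reproduces the logged values exactly (e.g. 8159.52 / 256 = 31.87 and 6471.34 / 256 = 25.28), which confirms the 4 KiB-per-I/O reading of the samples.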
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.921758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.921773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.921789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.921817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.921836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.921856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.921871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.921885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.921901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.921916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.921931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.921945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.921961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.921975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.921991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
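The long run of paired nvme_io_qpair_print_command / spdk_nvme_print_completion records above and below is the host-side view of queued I/O being failed back while the multipath test tears down the active path: first with ASYMMETRIC ACCESS INACCESSIBLE (03/02) as the ANA state flips, then with ABORTED - SQ DELETION (00/08) once the submission queue itself is deleted. A quick way to tally such a flood from a saved copy of this console output, as a sketch only (console.log is a placeholder name for the saved log):
  # Count aborted READs vs WRITEs, assuming the NOTICE format shown above.
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' console.log \
    | awk '{print $NF}' | sort | uniq -c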
00:26:00.876 [2024-12-06 13:19:28.922082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 
13:19:28.922401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.876 [2024-12-06 13:19:28.922662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.876 [2024-12-06 13:19:28.922686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.922701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.922718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.922733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.922749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.922764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.922780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.922795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.922811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.922826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.922843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.922857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.922874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.922889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.922905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.922919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.922936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.922950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.922966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.922988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 
nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.877 [2024-12-06 13:19:28.923479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.923511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.923541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.923571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.923603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.923633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.923664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.877 [2024-12-06 13:19:28.923680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.877 [2024-12-06 13:19:28.923695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.923711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116968 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.923725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.923750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.923765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.923781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.923803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.923821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.923836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.923852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.923867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.923883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.923897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.923913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.923928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.923944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.923959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.923975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.923989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 
[2024-12-06 13:19:28.924051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924369] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.878 [2024-12-06 13:19:28.924473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.878 [2024-12-06 13:19:28.924505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.878 [2024-12-06 13:19:28.924536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.878 [2024-12-06 13:19:28.924567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.878 [2024-12-06 13:19:28.924605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.878 [2024-12-06 13:19:28.924638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.878 [2024-12-06 13:19:28.924668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.878 [2024-12-06 13:19:28.924700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.878 [2024-12-06 13:19:28.924716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.878 [2024-12-06 13:19:28.924731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.924747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.924761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.924778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.924805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.924821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.924836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.924852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.924867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.924883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.924898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.924914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.924929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.924945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.924960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.924976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.924990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925031] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.879 [2024-12-06 13:19:28.925604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.879 [2024-12-06 13:19:28.925619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.925635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.880 [2024-12-06 13:19:28.925650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.925666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.880 [2024-12-06 13:19:28.925680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.925696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.880 [2024-12-06 13:19:28.925711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.925733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.880 [2024-12-06 13:19:28.925747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.925763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.880 [2024-12-06 13:19:28.925778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.925826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:00.880 [2024-12-06 13:19:28.925845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:00.880 [2024-12-06 13:19:28.925857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116840 len:8 PRP1 0x0 PRP2 0x0 00:26:00.880 [2024-12-06 13:19:28.925885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.926058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.880 [2024-12-06 13:19:28.926086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.926103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.880 [2024-12-06 13:19:28.926118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.926133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.880 [2024-12-06 13:19:28.926148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.926163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.880 [2024-12-06 13:19:28.926177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.880 [2024-12-06 13:19:28.926191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54ff30 is same with the state(6) to be set 00:26:00.880 [2024-12-06 13:19:28.927487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:00.880 [2024-12-06 13:19:28.927537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54ff30 (9): Bad file descriptor 00:26:00.880 [2024-12-06 
13:19:28.927654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:00.880 [2024-12-06 13:19:28.927685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54ff30 with addr=10.0.0.3, port=4421
00:26:00.880 [2024-12-06 13:19:28.927704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54ff30 is same with the state(6) to be set
00:26:00.880 [2024-12-06 13:19:28.927730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54ff30 (9): Bad file descriptor
00:26:00.880 [2024-12-06 13:19:28.927754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:00.880 [2024-12-06 13:19:28.927770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:00.880 [2024-12-06 13:19:28.927785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:00.880 [2024-12-06 13:19:28.927801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:00.880 [2024-12-06 13:19:28.927826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:00.880 6843.47 IOPS, 26.73 MiB/s [2024-12-06T13:19:49.520Z] 6892.81 IOPS, 26.93 MiB/s [2024-12-06T13:19:49.520Z] 6936.55 IOPS, 27.10 MiB/s [2024-12-06T13:19:49.520Z] 6982.26 IOPS, 27.27 MiB/s [2024-12-06T13:19:49.520Z] 7022.98 IOPS, 27.43 MiB/s [2024-12-06T13:19:49.520Z] 7063.02 IOPS, 27.59 MiB/s [2024-12-06T13:19:49.520Z] 7094.79 IOPS, 27.71 MiB/s [2024-12-06T13:19:49.520Z] 7123.93 IOPS, 27.83 MiB/s [2024-12-06T13:19:49.520Z] 7155.91 IOPS, 27.95 MiB/s [2024-12-06T13:19:49.520Z] 7187.71 IOPS, 28.08 MiB/s [2024-12-06T13:19:49.520Z] [2024-12-06 13:19:39.024772] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
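Two details in the failover sequence above are worth decoding. The errno = 111 on the posix_sock_create connect() call is ECONNREFUSED: nothing is accepting on 10.0.0.3:4421 at that instant, so the reconnect poll fails and bdev_nvme keeps retrying the reset until the path comes back ("Resetting controller successful" at 13:19:39). On a Linux box the code can be confirmed with:
  grep -w 111 /usr/include/asm-generic/errno.h   # -> #define ECONNREFUSED 111 /* Connection refused */
The MiB/s figures in the perf job's progress counters follow directly from the 4096-byte IO size reported in the job summary below: MiB/s = IOPS * 4096 / 2^20. For example:
  awk 'BEGIN { printf "%.2f\n", 6843.47 * 4096 / 1048576 }'   # -> 26.73, matching "6843.47 IOPS, 26.73 MiB/s"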
00:26:00.880 7213.78 IOPS, 28.18 MiB/s [2024-12-06T13:19:49.520Z] 7244.15 IOPS, 28.30 MiB/s [2024-12-06T13:19:49.520Z] 7271.85 IOPS, 28.41 MiB/s [2024-12-06T13:19:49.520Z] 7300.14 IOPS, 28.52 MiB/s [2024-12-06T13:19:49.520Z] 7324.98 IOPS, 28.61 MiB/s [2024-12-06T13:19:49.520Z] 7348.63 IOPS, 28.71 MiB/s [2024-12-06T13:19:49.520Z] 7369.73 IOPS, 28.79 MiB/s [2024-12-06T13:19:49.520Z] 7389.98 IOPS, 28.87 MiB/s [2024-12-06T13:19:49.520Z] 7408.94 IOPS, 28.94 MiB/s [2024-12-06T13:19:49.520Z] 7429.09 IOPS, 29.02 MiB/s [2024-12-06T13:19:49.520Z] Received shutdown signal, test time was about 56.014305 seconds
00:26:00.880 
00:26:00.880 Latency(us)
00:26:00.880 [2024-12-06T13:19:49.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:00.880 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:00.880 Verification LBA range: start 0x0 length 0x4000
00:26:00.880 Nvme0n1 : 56.01 7445.80 29.09 0.00 0.00 17158.90 565.99 7046430.72
00:26:00.880 [2024-12-06T13:19:49.520Z] ===================================================================================================================
00:26:00.880 [2024-12-06T13:19:49.520Z] Total : 7445.80 29.09 0.00 0.00 17158.90 565.99 7046430.72
00:26:00.880 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:01.166 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:26:01.166 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:01.166 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:26:01.166 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:01.166 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:26:01.166 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:01.166 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:26:01.166 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:01.166 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:01.166 rmmod nvme_tcp
00:26:01.424 rmmod nvme_fabrics
00:26:01.424 rmmod nvme_keyring
00:26:01.424 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:01.424 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:26:01.424 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:26:01.424 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 96261 ']'
00:26:01.424 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 96261
00:26:01.424 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96261 ']'
00:26:01.424 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96261
00:26:01.425 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:26:01.425 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:01.425 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96261 00:26:01.425 killing process with pid 96261 00:26:01.425 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:01.425 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:01.425 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96261' 00:26:01.425 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96261 00:26:01.425 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96261 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:01.683 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:01.940 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:01.940 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.940 13:19:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.940 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.940 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:26:01.940 00:26:01.941 real 1m2.236s 00:26:01.941 user 2m57.437s 00:26:01.941 sys 0m13.397s 00:26:01.941 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.941 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:01.941 ************************************ 00:26:01.941 END TEST nvmf_host_multipath 00:26:01.941 ************************************ 00:26:01.941 13:19:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:01.941 13:19:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:01.941 13:19:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.941 13:19:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.941 ************************************ 00:26:01.941 START TEST nvmf_timeout 00:26:01.941 ************************************ 00:26:01.941 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:01.941 * Looking for test storage... 00:26:01.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:01.941 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:01.941 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:01.941 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:02.198 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:02.198 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:02.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.199 --rc genhtml_branch_coverage=1 00:26:02.199 --rc genhtml_function_coverage=1 00:26:02.199 --rc genhtml_legend=1 00:26:02.199 --rc geninfo_all_blocks=1 00:26:02.199 --rc geninfo_unexecuted_blocks=1 00:26:02.199 00:26:02.199 ' 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:02.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.199 --rc genhtml_branch_coverage=1 00:26:02.199 --rc genhtml_function_coverage=1 00:26:02.199 --rc genhtml_legend=1 00:26:02.199 --rc geninfo_all_blocks=1 00:26:02.199 --rc geninfo_unexecuted_blocks=1 00:26:02.199 00:26:02.199 ' 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:02.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.199 --rc genhtml_branch_coverage=1 00:26:02.199 --rc genhtml_function_coverage=1 00:26:02.199 --rc genhtml_legend=1 00:26:02.199 --rc geninfo_all_blocks=1 00:26:02.199 --rc geninfo_unexecuted_blocks=1 00:26:02.199 00:26:02.199 ' 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:02.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.199 --rc genhtml_branch_coverage=1 00:26:02.199 --rc genhtml_function_coverage=1 00:26:02.199 --rc genhtml_legend=1 00:26:02.199 --rc geninfo_all_blocks=1 00:26:02.199 --rc geninfo_unexecuted_blocks=1 00:26:02.199 00:26:02.199 ' 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.199 
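The lcov probe above goes through scripts/common.sh's cmp_versions ('lt 1.15 2'): both version strings are split on '.', '-' and ':' and compared field by field as integers, exactly as the xtrace shows. A minimal standalone rendering of that comparison (the in-tree helper also implements the other comparison operators):

    # lt VER1 VER2 -> success when VER1 is strictly older than VER2.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov is older than 2: use the lcov_branch_coverage flags'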
13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.199 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.200 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.200 13:19:50 
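The "[: : integer expression expected" message above is a benign quirk of nvmf/common.sh line 33: test's -eq operator requires integers on both sides, and the variable being tested expands to an empty string. Reproducible in isolation:

    # test(1) rejects an empty operand to -eq, exactly as in the trace above:
    v=''
    [ "$v" -eq 1 ]        # -> "[: : integer expression expected" (exit status 2)
    [ "${v:-0}" -eq 1 ]   # a guarded form: the empty value defaults to 0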
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:02.200 Cannot find device "nvmf_init_br" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:02.200 Cannot find device "nvmf_init_br2" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:26:02.200 Cannot find device "nvmf_tgt_br" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:02.200 Cannot find device "nvmf_tgt_br2" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:02.200 Cannot find device "nvmf_init_br" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:02.200 Cannot find device "nvmf_init_br2" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:02.200 Cannot find device "nvmf_tgt_br" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:02.200 Cannot find device "nvmf_tgt_br2" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:02.200 Cannot find device "nvmf_br" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:02.200 Cannot find device "nvmf_init_if" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:02.200 Cannot find device "nvmf_init_if2" 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:02.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:02.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:02.200 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:02.201 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:02.201 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:02.201 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:02.459 13:19:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
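Everything from the ip netns add through the iptables inserts above builds the test topology: veth pairs whose bridge-facing peers are enslaved to nvmf_br, target-facing ends moved into nvmf_tgt_ns_spdk, and ACCEPT rules tagged with an SPDK_NVMF comment so the teardown shown earlier can strip exactly these rules. One initiator/target pair condensed from the trace (the *_if2 twins are created identically):

    # One veth pair per side, names and addresses as printed by nvmf_veth_init.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  master nvmf_br && ip link set nvmf_tgt_br up
    # tagged so that `iptables-save | grep -v SPDK_NVMF` can remove it later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'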
00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:02.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:02.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:26:02.459 00:26:02.459 --- 10.0.0.3 ping statistics --- 00:26:02.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.459 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:02.459 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:02.459 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:26:02.459 00:26:02.459 --- 10.0.0.4 ping statistics --- 00:26:02.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.459 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:02.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:26:02.459 00:26:02.459 --- 10.0.0.1 ping statistics --- 00:26:02.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.459 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:02.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:26:02.459 00:26:02.459 --- 10.0.0.2 ping statistics --- 00:26:02.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.459 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=97679 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 97679 00:26:02.459 13:19:51 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97679 ']' 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.459 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.460 13:19:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:02.718 [2024-12-06 13:19:51.130026] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:26:02.718 [2024-12-06 13:19:51.130738] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.718 [2024-12-06 13:19:51.280077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:02.718 [2024-12-06 13:19:51.338047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.718 [2024-12-06 13:19:51.338096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.718 [2024-12-06 13:19:51.338107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.718 [2024-12-06 13:19:51.338115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.718 [2024-12-06 13:19:51.338123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
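While nvmf_tgt initializes above, waitforlisten (nvmf/common.sh@510) blocks until the app answers on its RPC socket. A rough sketch of that polling loop; this is an assumption about the helper's shape, and the real implementation in autotest_common.sh differs in detail:

    # Poll until nvmf_tgt (pid 97679 in the trace) serves /var/tmp/spdk.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1                  # app exited early
            "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1                                                    # timed out
    }
    waitforlisten 97679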
00:26:02.718 [2024-12-06 13:19:51.339308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.718 [2024-12-06 13:19:51.339319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.654 13:19:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.654 13:19:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:03.654 13:19:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.654 13:19:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.654 13:19:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.654 13:19:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.654 13:19:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:03.654 13:19:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:03.912 [2024-12-06 13:19:52.496098] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.912 13:19:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:04.480 Malloc0 00:26:04.480 13:19:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:04.738 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:04.996 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:05.255 [2024-12-06 13:19:53.701159] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:05.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:05.255 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97776 00:26:05.255 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:05.255 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97776 /var/tmp/bdevperf.sock 00:26:05.255 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97776 ']' 00:26:05.255 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.255 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.255 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
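The RPC sequence traced above (host/timeout.sh@25 through @29) provisions the target end to end. Collected in one place, with the exact arguments from the trace:

    # Target-side provisioning as run above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO units (-u 8192)
    "$rpc" bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420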
00:26:05.255 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.255 13:19:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:05.255 [2024-12-06 13:19:53.769984] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:26:05.255 [2024-12-06 13:19:53.770074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97776 ] 00:26:05.513 [2024-12-06 13:19:53.918681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.513 [2024-12-06 13:19:53.981663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.459 13:19:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.459 13:19:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:06.459 13:19:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:06.459 13:19:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:07.036 NVMe0n1 00:26:07.036 13:19:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97818 00:26:07.037 13:19:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:07.037 13:19:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:26:07.037 Running I/O for 10 seconds... 
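On the initiator side, the trace configures bdevperf's NVMe layer and then starts the workload: bdev_nvme_set_options -r -1, then an attach with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, meaning the host retries the connection every 2 seconds and gives up on the controller after 5 seconds without one. These knobs are what the timeout test exercises once the listener is removed below. Replayed with the arguments from the trace:

    # Initiator-side setup and workload start, arguments as traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # kick off the 10 s verify run that the "Running I/O for 10 seconds..." line reports
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &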
00:26:07.970 13:19:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:08.231 9160.00 IOPS, 35.78 MiB/s [2024-12-06T13:19:56.871Z] [2024-12-06 13:19:56.685414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc0120 is same with the state(6) to be set
[... the same recv-state message repeated dozens of times, timestamps advancing through 13:19:56.685851, elided ...]
[2024-12-06 13:19:56.687348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.232 [2024-12-06 13:19:56.687393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching nvme_io_qpair_print_command / ABORTED - SQ DELETION completion pairs for the remaining in-flight commands (READ lba 86168-86352, WRITE lba 86632-86904) elided; the log continues in the same pattern ...]
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.233 [2024-12-06 13:19:56.691805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.233 [2024-12-06 13:19:56.691935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.692734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.692745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:08.234 [2024-12-06 13:19:56.693574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.693926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.693936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.694068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.694088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.694362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.694474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.694491] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.694501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.694620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.694640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.694773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.694921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.695064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.234 [2024-12-06 13:19:56.695205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.695365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.234 [2024-12-06 13:19:56.695469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.695485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.234 [2024-12-06 13:19:56.695495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.695507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.234 [2024-12-06 13:19:56.695517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.695528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.234 [2024-12-06 13:19:56.695537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.695549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.234 [2024-12-06 13:19:56.695900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.695917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.234 [2024-12-06 13:19:56.695927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.234 [2024-12-06 13:19:56.695939] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.695949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.695960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.695969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.695981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.695990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.696001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.696323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.696347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.696358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.696370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.696380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.696391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.696401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.696412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.696421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.696432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.696544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.696573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.696707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.696726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87040 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.696881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.696972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.696983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.696995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.697004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.697016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.697025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.697036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.697045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.697056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.697065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.697414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.697425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.697436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.697446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.697472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.697483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.697494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.697503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.697514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 
[2024-12-06 13:19:56.697529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.697761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.697781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.697794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.698044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.698069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.698082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.698174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.698189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.698201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.698210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.698223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.235 [2024-12-06 13:19:56.698501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.698767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.235 [2024-12-06 13:19:56.698787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.235 [2024-12-06 13:19:56.698913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87176 len:8 PRP1 0x0 PRP2 0x0 00:26:08.235 [2024-12-06 13:19:56.699026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.699418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.235 [2024-12-06 13:19:56.699462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.699477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.235 [2024-12-06 13:19:56.699486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 
[2024-12-06 13:19:56.699497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.235 [2024-12-06 13:19:56.699506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.699516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.235 [2024-12-06 13:19:56.699526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.235 [2024-12-06 13:19:56.699535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2df30 is same with the state(6) to be set 00:26:08.235 [2024-12-06 13:19:56.699943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:08.235 [2024-12-06 13:19:56.699981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2df30 (9): Bad file descriptor 00:26:08.235 [2024-12-06 13:19:56.700259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.235 [2024-12-06 13:19:56.700294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2df30 with addr=10.0.0.3, port=4420 00:26:08.235 [2024-12-06 13:19:56.700307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2df30 is same with the state(6) to be set 00:26:08.235 [2024-12-06 13:19:56.700328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2df30 (9): Bad file descriptor 00:26:08.235 [2024-12-06 13:19:56.700346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:08.235 [2024-12-06 13:19:56.700584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:08.235 [2024-12-06 13:19:56.700599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:08.235 [2024-12-06 13:19:56.700617] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
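The records above show one full pass of bdev_nvme's recovery cycle: connect() to 10.0.0.3:4420 is refused (errno 111, ECONNREFUSED) because the target's listener is gone, controller reinitialization fails, and a new reset is scheduled. The test observes this from outside through the bdevperf RPC socket; a minimal sketch of such a probe, built only from the rpc.py calls that appear further down in this trace (the loop itself is illustrative, not code from timeout.sh):

    # Poll the controller list over the bdevperf RPC socket. The NVMe0 entry
    # disappears once bdev_nvme gives up on the controller (loss timeout).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    while name=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'); do
        [[ -n "$name" ]] || break    # empty list: controller deleted
        sleep 1
    done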
00:26:08.235 [2024-12-06 13:19:56.700628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:08.235 13:19:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:10.105 5385.00 IOPS, 21.04 MiB/s [2024-12-06T13:19:58.745Z] 3590.00 IOPS, 14.02 MiB/s [2024-12-06T13:19:58.745Z] [2024-12-06 13:19:58.700893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.105 [2024-12-06 13:19:58.701000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2df30 with addr=10.0.0.3, port=4420 00:26:10.105 [2024-12-06 13:19:58.701019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2df30 is same with the state(6) to be set 00:26:10.105 [2024-12-06 13:19:58.701047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2df30 (9): Bad file descriptor 00:26:10.105 [2024-12-06 13:19:58.701068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:10.105 [2024-12-06 13:19:58.701080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:10.105 [2024-12-06 13:19:58.701092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:10.105 [2024-12-06 13:19:58.701104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:10.105 [2024-12-06 13:19:58.701116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:10.105 13:19:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:10.105 13:19:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:10.105 13:19:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:10.672 13:19:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:10.672 13:19:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:10.672 13:19:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:10.672 13:19:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:10.672 13:19:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:10.672 13:19:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:26:12.303 2692.50 IOPS, 10.52 MiB/s [2024-12-06T13:20:00.943Z] 2154.00 IOPS, 8.41 MiB/s [2024-12-06T13:20:00.943Z] [2024-12-06 13:20:00.701244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.303 [2024-12-06 13:20:00.701323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2df30 with addr=10.0.0.3, port=4420 00:26:12.303 [2024-12-06 13:20:00.701340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2df30 is same with the state(6) to be set 00:26:12.303 [2024-12-06 13:20:00.701366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2df30 (9): Bad file descriptor 00:26:12.303 [2024-12-06 13:20:00.701402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:12.304 [2024-12-06 13:20:00.701415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:12.304 [2024-12-06 13:20:00.701427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:12.304 [2024-12-06 13:20:00.701439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:12.304 [2024-12-06 13:20:00.701463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:14.248 1795.00 IOPS, 7.01 MiB/s [2024-12-06T13:20:02.888Z] 1538.57 IOPS, 6.01 MiB/s [2024-12-06T13:20:02.888Z] [2024-12-06 13:20:02.701524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:14.248 [2024-12-06 13:20:02.701593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:14.248 [2024-12-06 13:20:02.701607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:14.248 [2024-12-06 13:20:02.701633] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:26:14.248 [2024-12-06 13:20:02.701646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:15.182 1346.25 IOPS, 5.26 MiB/s 00:26:15.182 Latency(us) 00:26:15.182 [2024-12-06T13:20:03.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.182 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:15.182 Verification LBA range: start 0x0 length 0x4000 00:26:15.182 NVMe0n1 : 8.20 1314.03 5.13 15.62 0.00 96272.70 2100.13 7046430.72 00:26:15.182 [2024-12-06T13:20:03.822Z] =================================================================================================================== 00:26:15.182 [2024-12-06T13:20:03.822Z] Total : 1314.03 5.13 15.62 0.00 96272.70 2100.13 7046430.72 00:26:15.182 { 00:26:15.182 "results": [ 00:26:15.182 { 00:26:15.182 "job": "NVMe0n1", 00:26:15.182 "core_mask": "0x4", 00:26:15.182 "workload": "verify", 00:26:15.182 "status": "finished", 00:26:15.182 "verify_range": { 00:26:15.182 "start": 0, 00:26:15.182 "length": 16384 00:26:15.182 }, 00:26:15.182 "queue_depth": 128, 00:26:15.182 "io_size": 4096, 00:26:15.182 "runtime": 8.19614, 00:26:15.182 "iops": 1314.0331912339223, 00:26:15.182 "mibps": 5.132942153257509, 00:26:15.182 "io_failed": 128, 00:26:15.182 "io_timeout": 0, 00:26:15.182 "avg_latency_us": 96272.70408648792, 00:26:15.182 "min_latency_us": 2100.130909090909, 00:26:15.182 "max_latency_us": 7046430.72 00:26:15.182 } 00:26:15.182 ], 00:26:15.182 "core_count": 1 00:26:15.182 } 00:26:15.750 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:15.750 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:15.751 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:16.008 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:16.008 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:26:16.008 13:20:04 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:16.008 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:16.264 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:16.264 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97818 00:26:16.264 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97776 00:26:16.264 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97776 ']' 00:26:16.264 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97776 00:26:16.264 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:16.264 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:16.264 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97776 00:26:16.521 killing process with pid 97776 00:26:16.521 Received shutdown signal, test time was about 9.423596 seconds 00:26:16.521 00:26:16.521 Latency(us) 00:26:16.521 [2024-12-06T13:20:05.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.521 [2024-12-06T13:20:05.161Z] =================================================================================================================== 00:26:16.521 [2024-12-06T13:20:05.161Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.521 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:16.521 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:16.521 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97776' 00:26:16.521 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97776 00:26:16.521 13:20:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97776 00:26:16.521 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:16.799 [2024-12-06 13:20:05.418124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:17.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
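Before the second run starts, note that the summary block from the first run above is arithmetically self-consistent: IOPS is completed I/O divided by runtime, and MiB/s is IOPS times the 4096-byte I/O size; the 128 failed I/Os presumably correspond to one queue depth's worth of in-flight commands aborted at disconnect. A scratch check against the JSON fields above (awk used only as a calculator):

    awk 'BEGIN {
        iops = 1314.0331912339223; runtime = 8.19614; io_size = 4096
        printf "mibps        = %.9f\n", iops * io_size / (1024 * 1024)  # 5.132942153, matches the report
        printf "io completed = %.0f\n", iops * runtime                  # ~10770 I/Os over the 8.2 s runtime
    }'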
00:26:17.061 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:17.061 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97977 00:26:17.061 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97977 /var/tmp/bdevperf.sock 00:26:17.061 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97977 ']' 00:26:17.061 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:17.061 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.061 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:17.061 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.061 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.061 [2024-12-06 13:20:05.486694] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:26:17.061 [2024-12-06 13:20:05.486779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97977 ] 00:26:17.061 [2024-12-06 13:20:05.632998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.061 [2024-12-06 13:20:05.695267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.318 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.318 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:17.318 13:20:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:17.575 13:20:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:17.833 NVMe0n1 00:26:17.833 13:20:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=98011 00:26:17.833 13:20:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:17.833 13:20:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:18.089 Running I/O for 10 seconds... 
00:26:19.022 13:20:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:19.283 9000.00 IOPS, 35.16 MiB/s [2024-12-06T13:20:07.923Z] [2024-12-06 13:20:07.748729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe186b0 is same with the state(6) to be set [... same recv-state message repeated five more times for tqpair=0xe186b0 ...] [2024-12-06 13:20:07.750097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.283 [2024-12-06 13:20:07.750139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.283 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: queued READ (lba 86224-86448) and WRITE (lba 86584-86784) commands on sqid:1 aborted with SQ DELETION (00/08) after the listener was removed ...] 00:26:19.284 [2024-12-06 13:20:07.753979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.284 [2024-12-06 13:20:07.753988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.284 [2024-12-06 13:20:07.754214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.284 [2024-12-06 13:20:07.754233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.284 [2024-12-06 13:20:07.754246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.284 [2024-12-06 13:20:07.754256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.284 [2024-12-06 13:20:07.754268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.284 [2024-12-06 13:20:07.754277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.754289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.754299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.754310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.754319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.754330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.754340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.754435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.754462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.754476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.754545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.754562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.754571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.754583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.754592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.754603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:124 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.754718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.754737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.754753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.754764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.754884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.755032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.755154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.755303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.755399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.755421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.755442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.755588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86960 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.755614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.755866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.755891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.755902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 
13:20:07.756754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.756923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.756932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.757007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.285 [2024-12-06 13:20:07.757019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.285 [2024-12-06 13:20:07.757030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.757893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.757997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.758014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.758035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.758116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.758141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.286 [2024-12-06 13:20:07.758162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.758322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86456 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.758333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.758597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.758607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86464 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.758617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.758635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.758643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86472 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.758651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.758669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.758677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86480 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.758685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.758780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.758789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86488 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.758797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.758814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.758822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86496 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.758937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.758954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.758962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.758970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86504 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.759104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.759224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.759242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.759348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86512 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.759368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.759380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.759389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.759496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86520 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.759510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.759521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.759529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.759537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86528 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.759656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.759668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.759677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.759686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86536 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.759812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.759823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.286 [2024-12-06 13:20:07.759949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.286 [2024-12-06 13:20:07.759963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86544 len:8 PRP1 0x0 PRP2 0x0 00:26:19.286 [2024-12-06 13:20:07.759972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.286 [2024-12-06 13:20:07.759983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.287 [2024-12-06 13:20:07.759991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.287 [2024-12-06 13:20:07.760052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86552 len:8 PRP1 0x0 PRP2 0x0 00:26:19.287 [2024-12-06 13:20:07.760063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.287 [2024-12-06 13:20:07.760073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.287 [2024-12-06 13:20:07.760081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.287 [2024-12-06 13:20:07.760089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86560 len:8 PRP1 0x0 PRP2 0x0 00:26:19.287 [2024-12-06 13:20:07.760098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.287 [2024-12-06 13:20:07.760107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.287 [2024-12-06 13:20:07.760115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.287 [2024-12-06 13:20:07.760123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86568 len:8 PRP1 0x0 PRP2 0x0 00:26:19.287 [2024-12-06 13:20:07.760132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.287 [2024-12-06 13:20:07.760246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.287 [2024-12-06 13:20:07.760261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.287 [2024-12-06 13:20:07.760269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86576 len:8 PRP1 0x0 PRP2 0x0 00:26:19.287 [2024-12-06 13:20:07.760278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
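Every abort above carries the same completion status, which spdk_nvme_print_completion renders as "(00/08)": status code type 0x0 (generic command status) and status code 0x08, defined by the NVMe specification as "Command Aborted due to SQ Deletion" — exactly what happens when the host deletes the submission queue while resetting the controller. A minimal, illustrative decoder for that "(SCT/SC)" pair — a hand-written subset of the spec tables, not an SPDK API:

```python
# Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion, e.g. "(00/08)".
# Tables are a small hand-copied subset of the NVMe base spec, for illustration only.

STATUS_CODE_TYPES = {
    0x0: "GENERIC",
    0x1: "COMMAND SPECIFIC",
    0x2: "MEDIA AND DATA INTEGRITY",
    0x3: "PATH RELATED",
}

# Generic (SCT 0x0) status codes; 0x08 is the one that dominates this log.
GENERIC_STATUS_CODES = {
    0x00: "SUCCESS",
    0x04: "DATA TRANSFER ERROR",
    0x07: "ABORTED - COMMAND ABORT REQUESTED",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct: int, sc: int) -> str:
    """Render an NVMe completion's (SCT/SC) pair the way the log prints it."""
    if sct == 0x0:
        name = GENERIC_STATUS_CODES.get(sc, f"GENERIC SC {sc:#04x}")
    else:
        name = f"{STATUS_CODE_TYPES.get(sct, 'UNKNOWN SCT')} SC {sc:#04x}"
    return f"{name} ({sct:02x}/{sc:02x})"

print(decode_status(0x0, 0x08))  # -> ABORTED - SQ DELETION (00/08)
```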
00:26:19.287 [2024-12-06 13:20:07.760626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:19.287 [2024-12-06 13:20:07.760652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:19.287 [2024-12-06 13:20:07.760665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:19.287 [2024-12-06 13:20:07.760674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:19.287 [2024-12-06 13:20:07.760685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:19.287 [2024-12-06 13:20:07.760694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:19.287 [2024-12-06 13:20:07.760704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:19.287 [2024-12-06 13:20:07.760713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:19.287 [2024-12-06 13:20:07.760722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244af30 is same with the state(6) to be set
00:26:19.287 [2024-12-06 13:20:07.761238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.287 [2024-12-06 13:20:07.761276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244af30 (9): Bad file descriptor
00:26:19.287 [2024-12-06 13:20:07.761475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.287 [2024-12-06 13:20:07.761522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244af30 with addr=10.0.0.3, port=4420
00:26:19.287 [2024-12-06 13:20:07.761535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244af30 is same with the state(6) to be set
00:26:19.287 [2024-12-06 13:20:07.761555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244af30 (9): Bad file descriptor
00:26:19.287 [2024-12-06 13:20:07.761572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.287 [2024-12-06 13:20:07.761840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.287 [2024-12-06 13:20:07.761868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.287 [2024-12-06 13:20:07.761880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
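The reconnect attempt dies in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.3:4420 at this point, because the timeout test has withdrawn the NVMe/TCP listener. A minimal sketch of the same failure mode, reusing the address from this log (purely illustrative; do not expect it to resolve outside this test rig):

```python
# Probe a TCP endpoint the way the failing reconnect does; errno 111 /
# ECONNREFUSED means the port is reachable but nothing is accepting on it.

import errno
import socket

def probe_listener(addr: str = "10.0.0.3", port: int = 4420) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect((addr, port))
        print(f"{addr}:{port} accepted the connection")
    except OSError as e:
        # With the listener removed this matches the log: "connect() failed, errno = 111"
        print(f"connect() failed, errno = {e.errno} ({errno.errorcode.get(e.errno)})")
    finally:
        s.close()

probe_listener()
```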
00:26:19.287 [2024-12-06 13:20:07.761893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
13:20:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:26:20.225 5388.50 IOPS, 21.05 MiB/s [2024-12-06T13:20:08.865Z]
[2024-12-06 13:20:08.762042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.225 [2024-12-06 13:20:08.762110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244af30 with addr=10.0.0.3, port=4420
00:26:20.225 [2024-12-06 13:20:08.762128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244af30 is same with the state(6) to be set
00:26:20.225 [2024-12-06 13:20:08.762155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244af30 (9): Bad file descriptor
00:26:20.225 [2024-12-06 13:20:08.762175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:20.225 [2024-12-06 13:20:08.762186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:20.225 [2024-12-06 13:20:08.762198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:20.225 [2024-12-06 13:20:08.762210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:20.225 [2024-12-06 13:20:08.762222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:20.225 13:20:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[2024-12-06 13:20:09.018205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:26:20.484 13:20:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 98011
00:26:21.309 3592.33 IOPS, 14.03 MiB/s [2024-12-06T13:20:09.949Z]
[2024-12-06 13:20:09.780553] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
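rpc.py is a thin JSON-RPC 2.0 client, so the nvmf_subsystem_add_listener call above boils down to a single request on the target's Unix socket. A minimal hand-rolled sketch of that request — the method name and parameter shape follow the SPDK JSON-RPC documentation, and /var/tmp/spdk.sock is the documented default socket path, so adjust both for a real deployment:

```python
# Send the JSON-RPC request behind "rpc.py nvmf_subsystem_add_listener ..."
# directly over SPDK's Unix-domain RPC socket.

import json
import socket

def add_tcp_listener(sock_path: str = "/var/tmp/spdk.sock") -> dict:
    req = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_listener",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "listen_address": {
                "trtype": "TCP",
                "traddr": "10.0.0.3",   # address and port taken from this log
                "trsvcid": "4420",
            },
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        return json.loads(s.recv(65536).decode())
```

Once the listener is back, the target prints the "*** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***" notice seen above, and the host's stalled reset loop finally reconnects ("Resetting controller successful").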
00:26:23.183 2694.25 IOPS, 10.52 MiB/s [2024-12-06T13:20:12.759Z]
3607.00 IOPS, 14.09 MiB/s [2024-12-06T13:20:13.717Z]
4537.00 IOPS, 17.72 MiB/s [2024-12-06T13:20:14.754Z]
5199.86 IOPS, 20.31 MiB/s [2024-12-06T13:20:15.697Z]
5708.38 IOPS, 22.30 MiB/s [2024-12-06T13:20:16.651Z]
6106.22 IOPS, 23.85 MiB/s [2024-12-06T13:20:16.651Z]
6420.80 IOPS, 25.08 MiB/s
00:26:28.011 Latency(us)
00:26:28.011 Device Information                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:28.011 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:28.011 Verification LBA range: start 0x0 length 0x4000
00:26:28.011 NVMe0n1                                  :      10.01    6423.76      25.09       0.00     0.00   19896.45    2189.50 3035150.89
00:26:28.011 ===================================================================================================================
00:26:28.011 Total                                    :               6423.76      25.09       0.00     0.00   19896.45    2189.50 3035150.89
00:26:28.011 {
00:26:28.011   "results": [
00:26:28.011     {
00:26:28.011       "job": "NVMe0n1",
00:26:28.011       "core_mask": "0x4",
00:26:28.011       "workload": "verify",
00:26:28.011       "status": "finished",
00:26:28.011       "verify_range": {
00:26:28.011         "start": 0,
00:26:28.011         "length": 16384
00:26:28.011       },
00:26:28.011       "queue_depth": 128,
00:26:28.011       "io_size": 4096,
00:26:28.011       "runtime": 10.007534,
00:26:28.011       "iops": 6423.760338960627,
00:26:28.011       "mibps": 25.09281382406495,
00:26:28.011       "io_failed": 0,
00:26:28.011       "io_timeout": 0,
00:26:28.011       "avg_latency_us": 19896.454294417275,
00:26:28.011       "min_latency_us": 2189.498181818182,
00:26:28.011       "max_latency_us": 3035150.8945454545
00:26:28.011     }
00:26:28.011   ],
00:26:28.011   "core_count": 1
00:26:28.011 }
00:26:28.011 13:20:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=98128
00:26:28.011 13:20:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:28.011 13:20:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:26:28.270 Running I/O for 10 seconds...
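Two numbers in the bdevperf results block above can be cross-checked by hand: "mibps" is just iops × io_size rescaled to 2^20 bytes, and since bdevperf keeps the queue full, Little's law (concurrency = rate × latency) says IOPS should sit close to queue_depth / avg_latency. A quick sketch using values copied from the JSON — plain arithmetic, not an SPDK API:

```python
# Sanity-check the bdevperf results: throughput from IOPS, and IOPS from
# queue depth and mean latency via Little's law.

results = {
    "queue_depth": 128,
    "io_size": 4096,                      # bytes per I/O
    "iops": 6423.760338960627,
    "mibps": 25.09281382406495,
    "avg_latency_us": 19896.454294417275,
}

mibps = results["iops"] * results["io_size"] / (1024 * 1024)
print(f"throughput: {mibps:.8f} MiB/s")   # 25.09281382 -> matches "mibps"

# Little's law: with the queue held full, iops ~= queue_depth / avg_latency.
# 128 / 19896.45us ~= 6433 IOPS, within ~0.2% of the measured 6423.76;
# ramp-up and completion-path overhead account for the small gap.
littles_law_iops = results["queue_depth"] / (results["avg_latency_us"] * 1e-6)
print(f"Little's law estimate: {littles_law_iops:.0f} IOPS")
```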
00:26:29.206 13:20:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:29.468 8933.00 IOPS, 34.89 MiB/s [2024-12-06T13:20:18.108Z]
00:26:29.468 [2024-12-06 13:20:17.915067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16d40 is same with the state(6) to be set
[... this identical recv-state error for tqpair=0xe16d40 repeats over a hundred times between 13:20:17.915067 and 13:20:17.916204 while the target tears the connection down after the listener is removed ...]
00:26:29.470 [2024-12-06 13:20:17.917194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.470 [2024-12-06 13:20:17.917237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:29.470 [2024-12-06 13:20:17.917260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.470 [2024-12-06 13:20:17.917271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:29.470 [2024-12-06 13:20:17.917286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.470 [2024-12-06 13:20:17.917296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:29.470 [2024-12-06 13:20:17.917307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-06
13:20:17.917316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.917327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.917337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.917348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.917462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.917482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.917492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.917772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.917893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.917907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.917917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.917929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.917938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.917949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.917959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.917971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.917980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.917991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918867] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.918933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.918944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.919195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.919217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.919239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.919260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.919280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.919521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.919549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.919699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.919819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.470 [2024-12-06 13:20:17.919851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.470 [2024-12-06 13:20:17.919861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.919975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.919988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.919999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.920136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.920279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.920404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.920426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.920437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.920575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.920682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.920696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.920706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
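
The paired *NOTICE* lines here come from the NVMe driver dumping every still-queued command together with a manufactured completion while the submission queue is torn down: "(00/08)" is status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion) from the NVMe specification. When triaging a flood like this, a small throwaway script can summarize it; the sketch below is illustrative only and not part of the test suite (the default log file name is hypothetical):

    #!/usr/bin/env python3
    # Illustrative log triage only -- not part of the SPDK test suite.
    # Counts "ABORTED - SQ DELETION" completions and the LBA span of the
    # queued commands printed alongside them.
    import re
    import sys

    CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: "
                     r"(READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+)")
    ABORT = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")

    def main(path="autotest.log"):  # hypothetical file name
        with open(path) as f:
            text = f.read()
        lbas = [int(m.group(5)) for m in CMD.finditer(text)]
        aborts = len(ABORT.findall(text))
        if lbas:
            print(f"{aborts} aborted completions; lba range {min(lbas)}-{max(lbas)}")

    if __name__ == "__main__":
        main(*sys.argv[1:])

Run against this console output, it would report one aborted completion per queued READ/WRITE and the contiguous LBA span of the verify workload.
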
00:26:29.471 [2024-12-06 13:20:17.920718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.920728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.920739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.920748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.920759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.920769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.920781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.920898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.920914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.921163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.921188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.921198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.921209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.921219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.921230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.921239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.921250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.921259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.921402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.921555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.921698] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.921820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.921838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.921848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.922104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.922230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.922249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.922259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.922407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.922552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.922645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.922658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.922670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.922679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.922690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.922699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.922710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.922719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.922730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.922739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.922861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.922875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.923153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.923175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.923195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.923216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.923354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.923470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.471 [2024-12-06 13:20:17.923492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.471 [2024-12-06 13:20:17.923642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.471 [2024-12-06 13:20:17.923890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:83 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.471 [2024-12-06 13:20:17.923915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.471 [2024-12-06 13:20:17.923936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.471 [2024-12-06 13:20:17.923957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.471 [2024-12-06 13:20:17.923967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.923977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.924103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.924237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.924264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.924521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.924546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.924778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.924802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.924813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.924824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.924833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.924958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.924974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.924985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84568 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.924995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.925139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.925278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.925422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.925520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.925535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.925545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.925556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.925566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.925686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.925706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.925719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.925997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.926108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.926129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.926142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.926151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.926278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.926290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.926301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 
13:20:17.926566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.926590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.926600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.926613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.926622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.926633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.926643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.926653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.926663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.926674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.926683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.926806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.926819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.927091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.472 [2024-12-06 13:20:17.927113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.927133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.927162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.927183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.927407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.927430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.927565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.927594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.927725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.927737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.927869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.928020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.928165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.928293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.928313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.928445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.928548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.928564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.472 [2024-12-06 13:20:17.928573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.472 [2024-12-06 13:20:17.928584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.928594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.928702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.928718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.928731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.928740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.928873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.929022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.929166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.929284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.929305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.929315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.929435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.929463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.929702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.929723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.929736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.929745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.929756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.930142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.930166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.473 [2024-12-06 13:20:17.930177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.930438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.473 [2024-12-06 13:20:17.930462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.473 [2024-12-06 13:20:17.930472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84464 len:8 PRP1 0x0 PRP2 0x0 00:26:29.473 [2024-12-06 13:20:17.930604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.930927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.473 [2024-12-06 13:20:17.930957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.930968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.473 [2024-12-06 13:20:17.930977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.930987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.473 [2024-12-06 13:20:17.930996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.931007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.473 [2024-12-06 13:20:17.931016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.473 [2024-12-06 13:20:17.931150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244af30 is same with the state(6) to be set 00:26:29.473 [2024-12-06 13:20:17.931694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:29.473 [2024-12-06 13:20:17.931733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244af30 (9): Bad file descriptor 00:26:29.473 [2024-12-06 13:20:17.932022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-12-06 13:20:17.932055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244af30 with addr=10.0.0.3, port=4420 00:26:29.473 [2024-12-06 13:20:17.932068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x244af30 is same with the state(6) to be set 00:26:29.473 [2024-12-06 13:20:17.932088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244af30 (9): Bad file descriptor 00:26:29.473 [2024-12-06 13:20:17.932106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:29.473 [2024-12-06 13:20:17.932115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:29.473 [2024-12-06 13:20:17.932618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:29.473 [2024-12-06 13:20:17.932645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:29.473 [2024-12-06 13:20:17.932659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:29.473 13:20:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:30.407 5231.00 IOPS, 20.43 MiB/s [2024-12-06T13:20:19.047Z] [2024-12-06 13:20:18.932805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.407 [2024-12-06 13:20:18.932880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244af30 with addr=10.0.0.3, port=4420 00:26:30.407 [2024-12-06 13:20:18.932898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244af30 is same with the state(6) to be set 00:26:30.407 [2024-12-06 13:20:18.932936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244af30 (9): Bad file descriptor 00:26:30.407 [2024-12-06 13:20:18.932957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:30.407 [2024-12-06 13:20:18.932968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:30.407 [2024-12-06 13:20:18.932986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:30.407 [2024-12-06 13:20:18.932998] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
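
errno = 111 in the posix_sock_create errors here is ECONNREFUSED: the 10.0.0.3:4420 listener has been removed, so every reconnect attempt's connect() is refused and bdev_nvme retries roughly once per second while the test script sleeps. A minimal host-side probe of the same condition might look like the sketch below (illustrative only; address and port are taken from this log):

    #!/usr/bin/env python3
    # Illustrative probe only: poll an NVMe/TCP listener the way the
    # host-side retry loop effectively does, until connect() stops
    # being refused. Address/port come from the log above.
    import socket
    import time

    def wait_for_listener(addr="10.0.0.3", port=4420, interval=1.0):
        while True:
            try:
                with socket.create_connection((addr, port), timeout=1.0):
                    return  # listener is back; connect() no longer refused
            except ConnectionRefusedError:
                time.sleep(interval)  # mirrors the ~1 s reconnect cadence

    if __name__ == "__main__":
        wait_for_listener()
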
00:26:30.407 [2024-12-06 13:20:18.933010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:31.339 3487.33 IOPS, 13.62 MiB/s [2024-12-06T13:20:19.979Z] [2024-12-06 13:20:19.933169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.339 [2024-12-06 13:20:19.933254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244af30 with addr=10.0.0.3, port=4420 00:26:31.339 [2024-12-06 13:20:19.933271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244af30 is same with the state(6) to be set 00:26:31.339 [2024-12-06 13:20:19.933298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244af30 (9): Bad file descriptor 00:26:31.339 [2024-12-06 13:20:19.933318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:31.339 [2024-12-06 13:20:19.933329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:31.339 [2024-12-06 13:20:19.933340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:31.339 [2024-12-06 13:20:19.933351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:31.339 [2024-12-06 13:20:19.933363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:32.329 2615.50 IOPS, 10.22 MiB/s [2024-12-06T13:20:20.969Z] [2024-12-06 13:20:20.936517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-12-06 13:20:20.936594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244af30 with addr=10.0.0.3, port=4420 00:26:32.329 [2024-12-06 13:20:20.936612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244af30 is same with the state(6) to be set 00:26:32.329 [2024-12-06 13:20:20.937066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244af30 (9): Bad file descriptor 00:26:32.329 [2024-12-06 13:20:20.937501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:32.329 [2024-12-06 13:20:20.937529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:32.329 [2024-12-06 13:20:20.937542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:32.329 [2024-12-06 13:20:20.937556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
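
The retry loop is broken by the nvmf_subsystem_add_listener RPC issued just below, which restores the 10.0.0.3:4420 listener so the next reconnect attempt can complete. scripts/rpc.py is a thin JSON-RPC client over a Unix socket; a hedged sketch of the equivalent raw request follows, assuming the target's default RPC socket path (/var/tmp/spdk.sock is an assumption, not shown in this log):

    #!/usr/bin/env python3
    # Hedged sketch: the raw JSON-RPC equivalent of
    #   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    #          -t tcp -a 10.0.0.3 -s 4420
    # The socket path below is an assumption (SPDK target default).
    import json
    import socket

    def add_listener(sock_path="/var/tmp/spdk.sock"):
        req = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "nvmf_subsystem_add_listener",
            "params": {
                "nqn": "nqn.2016-06.io.spdk:cnode1",
                "listen_address": {
                    "trtype": "tcp",
                    "traddr": "10.0.0.3",
                    "trsvcid": "4420",
                },
            },
        }
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            print(s.recv(65536).decode())  # single small JSON response expected

    if __name__ == "__main__":
        add_listener()
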
00:26:32.329 [2024-12-06 13:20:20.937568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:32.329 13:20:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:32.895 [2024-12-06 13:20:21.229904] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:32.895 13:20:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 98128 00:26:33.410 2092.40 IOPS, 8.17 MiB/s [2024-12-06T13:20:22.050Z] [2024-12-06 13:20:21.964697] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:26:35.274 2939.00 IOPS, 11.48 MiB/s [2024-12-06T13:20:24.848Z] 3831.71 IOPS, 14.97 MiB/s [2024-12-06T13:20:25.781Z] 4470.12 IOPS, 17.46 MiB/s [2024-12-06T13:20:27.153Z] 4988.56 IOPS, 19.49 MiB/s [2024-12-06T13:20:27.153Z] 5394.60 IOPS, 21.07 MiB/s 00:26:38.514 Latency(us) 00:26:38.514 [2024-12-06T13:20:27.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.514 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:38.514 Verification LBA range: start 0x0 length 0x4000 00:26:38.514 NVMe0n1 : 10.01 5400.78 21.10 3620.96 0.00 14159.90 1087.30 3035150.89 00:26:38.514 [2024-12-06T13:20:27.154Z] =================================================================================================================== 00:26:38.514 [2024-12-06T13:20:27.154Z] Total : 5400.78 21.10 3620.96 0.00 14159.90 0.00 3035150.89 00:26:38.514 { 00:26:38.514 "results": [ 00:26:38.514 { 00:26:38.514 "job": "NVMe0n1", 00:26:38.514 "core_mask": "0x4", 00:26:38.514 "workload": "verify", 00:26:38.514 "status": "finished", 00:26:38.514 "verify_range": { 00:26:38.514 "start": 0, 00:26:38.514 "length": 16384 00:26:38.514 }, 00:26:38.514 "queue_depth": 128, 00:26:38.514 "io_size": 4096, 00:26:38.514 "runtime": 10.012259, 00:26:38.514 "iops": 5400.779184797357, 00:26:38.514 "mibps": 21.096793690614675, 00:26:38.514 "io_failed": 36254, 00:26:38.514 "io_timeout": 0, 00:26:38.514 "avg_latency_us": 14159.898399529795, 00:26:38.514 "min_latency_us": 1087.3018181818181, 00:26:38.514 "max_latency_us": 3035150.8945454545 00:26:38.514 } 00:26:38.514 ], 00:26:38.514 "core_count": 1 00:26:38.514 } 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97977 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97977 ']' 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97977 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97977 00:26:38.514 killing process with pid 97977 00:26:38.514 Received shutdown signal, test time was about 10.000000 seconds 00:26:38.514 00:26:38.514 Latency(us) 00:26:38.514 [2024-12-06T13:20:27.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.514 [2024-12-06T13:20:27.154Z] =================================================================================================================== 00:26:38.514 [2024-12-06T13:20:27.154Z] 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97977' 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97977 00:26:38.514 13:20:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97977 00:26:38.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:38.514 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=98249 00:26:38.514 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:26:38.514 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 98249 /var/tmp/bdevperf.sock 00:26:38.514 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98249 ']' 00:26:38.514 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:38.514 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.514 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:38.514 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.514 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.514 [2024-12-06 13:20:27.081690] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:26:38.514 [2024-12-06 13:20:27.081804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98249 ] 00:26:38.772 [2024-12-06 13:20:27.228719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.772 [2024-12-06 13:20:27.288625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.030 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.030 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:39.030 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98263 00:26:39.030 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98249 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:26:39.030 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:26:39.288 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:39.547 NVMe0n1 00:26:39.547 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98317 00:26:39.547 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:39.547 13:20:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:26:39.547 Running I/O for 10 seconds... 
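Here the harness restarts bdevperf to exercise a listener drop with a bounded host-side reconnect policy. Consolidating the @109-@123 commands above into one runnable sequence (flags and paths are verbatim from this trace; the sequence is a sketch of the traced invocations, not an authoritative recipe):

    spdk=/home/vagrant/spdk_repo/spdk
    # Start bdevperf idle (-z) on its own RPC socket; the workload is
    # 128-deep 4 KiB random reads for 10 seconds once triggered
    $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w randread -t 10 -f &
    # NVMe options exactly as issued at @118 above
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    # Attach the controller with a 5 s controller-loss timeout and a 2 s
    # reconnect delay, the policy under test in this phase
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Kick off the run; @126 below then removes the listener mid-I/O
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests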
00:26:40.507 13:20:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:40.770 17720.00 IOPS, 69.22 MiB/s [2024-12-06T13:20:29.410Z] [2024-12-06 13:20:29.238175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe198f0 is same with the state(6) to be set
00:26:40.770 (last message repeated through [2024-12-06 13:20:29.239085])
00:26:40.771 [2024-12-06 13:20:29.240337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.771 [2024-12-06 13:20:29.240379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.240393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.771 [2024-12-06 13:20:29.240403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.240413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.771 [2024-12-06 13:20:29.240422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.240433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.771 [2024-12-06 13:20:29.240442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.240467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56f30 is same with the state(6) to be set 00:26:40.771 [2024-12-06 13:20:29.240538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.240553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.240573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.240583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.240594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.240604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.240615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.240624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.240635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.240644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.240655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.240825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 13:20:29.241688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.771 [2024-12-06 13:20:29.241697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.771 [2024-12-06 
13:20:29.242075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.242975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.242985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.243135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.243265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.243279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.243427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.243655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.243667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.243678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.243687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.243698] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.243824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.243838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.243847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.243858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.243867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.243993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.244143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.244252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.244265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.244277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.244287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.244298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.244307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.244318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.244327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.244338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.244347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.244358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.244367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.244733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 
lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.244746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.244758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.244768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.244779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.244788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.245038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.245059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.772 [2024-12-06 13:20:29.245072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.772 [2024-12-06 13:20:29.245081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.245925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.245934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.246179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.246190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.246201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.246210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.246222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 
[2024-12-06 13:20:29.246231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.246477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.246499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.246511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.246522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.246533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.246542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.246553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.246562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.246820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.246841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.246853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.247000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.247145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.247290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.247535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.247561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.247574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.247583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.247595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.247606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.247617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.247754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.247885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.247897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.247909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.248168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.248185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.248195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.248206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.248216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.773 [2024-12-06 13:20:29.248227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.773 [2024-12-06 13:20:29.248473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.248488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.248497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.248510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.248519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.248531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.248769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.248791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.248802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.248814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.248823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.249086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.249100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.249111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.249122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.249134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.249143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.249358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.249379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.249393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.249402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.249413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.249562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.249707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.249974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.249997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.250103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.250123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.250133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.250147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.250156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.250434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.250447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.250476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.250485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.250496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.250505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.250516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.250801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.250814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.250824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.250835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.250843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.250854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.251003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.251122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.251133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.251144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.251274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
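The repeated NOTICE pairs above are SPDK printing each queued READ and then its forced completion as the submission queue is deleted during the disconnect; the status tuple "(00/08)" is (sct/sc), i.e. Status Code Type 0x0 (generic command status) with Status Code 0x08 (Command Aborted due to SQ Deletion). A minimal sketch for tallying these offline, assuming this console output has been saved to a file; "build.log" is an illustrative name, not one the test writes:

    # Count READ commands failed with ABORTED - SQ DELETION (sct 0x0 / sc 0x08).
    grep -c 'ABORTED - SQ DELETION (00/08)' build.log
    # List the LBAs that were still in flight when the queue pair was torn down.
    grep -o 'lba:[0-9]*' build.log | sort -t: -k2 -n | uniq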
00:26:40.774 [2024-12-06 13:20:29.251290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.251554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.251697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.251821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.251844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.251854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.252071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.252091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.252104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.252113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.252125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.252134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.252145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.252376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.252397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.252407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.252419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.252428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.252438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.252547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.252568] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.774 [2024-12-06 13:20:29.252578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.774 [2024-12-06 13:20:29.252589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.775 [2024-12-06 13:20:29.252812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-12-06 13:20:29.252838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.775 [2024-12-06 13:20:29.252848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-12-06 13:20:29.252860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.775 [2024-12-06 13:20:29.252869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-12-06 13:20:29.252881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.775 [2024-12-06 13:20:29.252890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-12-06 13:20:29.253255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.775 [2024-12-06 13:20:29.253279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-12-06 13:20:29.253292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.775 [2024-12-06 13:20:29.253302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-12-06 13:20:29.253329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:40.775 [2024-12-06 13:20:29.253340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:40.775 [2024-12-06 13:20:29.253448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88800 len:8 PRP1 0x0 PRP2 0x0 00:26:40.775 [2024-12-06 13:20:29.253601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-12-06 13:20:29.254083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd56f30 (9): Bad file descriptor 00:26:40.775 [2024-12-06 13:20:29.254507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:40.775 [2024-12-06 13:20:29.254628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.775 [2024-12-06 13:20:29.254653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0xd56f30 with addr=10.0.0.3, port=4420 00:26:40.775 [2024-12-06 13:20:29.254665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56f30 is same with the state(6) to be set 00:26:40.775 [2024-12-06 13:20:29.254684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd56f30 (9): Bad file descriptor 00:26:40.775 [2024-12-06 13:20:29.254701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:40.775 [2024-12-06 13:20:29.254710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:40.775 [2024-12-06 13:20:29.254721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:40.775 [2024-12-06 13:20:29.254732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:40.775 [2024-12-06 13:20:29.254742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:40.775 13:20:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 98317 00:26:42.645 9774.00 IOPS, 38.18 MiB/s [2024-12-06T13:20:31.285Z] 6516.00 IOPS, 25.45 MiB/s [2024-12-06T13:20:31.285Z] [2024-12-06 13:20:31.255079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.645 [2024-12-06 13:20:31.255148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd56f30 with addr=10.0.0.3, port=4420 00:26:42.645 [2024-12-06 13:20:31.255166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56f30 is same with the state(6) to be set 00:26:42.645 [2024-12-06 13:20:31.255193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd56f30 (9): Bad file descriptor 00:26:42.645 [2024-12-06 13:20:31.255214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:42.645 [2024-12-06 13:20:31.255225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:42.645 [2024-12-06 13:20:31.255237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:42.645 [2024-12-06 13:20:31.255249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
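The benchmark's one-second ticker interleaves with the reconnect errors; each pair is simply IOPS scaled by the 4 KiB I/O size (MiB/s = IOPS * 4096 / 2^20). A quick check of that conversion, using the first ticker value above:

    # 9774 IOPS of 4096-byte reads -> prints "38.18 MiB/s", matching the ticker.
    awk 'BEGIN { printf "%.2f MiB/s\n", 9774.00 * 4096 / 1048576 }'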
00:26:42.645 [2024-12-06 13:20:31.255261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:44.517 4887.00 IOPS, 19.09 MiB/s [2024-12-06T13:20:33.415Z] 3909.60 IOPS, 15.27 MiB/s [2024-12-06T13:20:33.415Z] [2024-12-06 13:20:33.255549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.775 [2024-12-06 13:20:33.255614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd56f30 with addr=10.0.0.3, port=4420 00:26:44.775 [2024-12-06 13:20:33.255631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56f30 is same with the state(6) to be set 00:26:44.775 [2024-12-06 13:20:33.255658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd56f30 (9): Bad file descriptor 00:26:44.775 [2024-12-06 13:20:33.255679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:44.775 [2024-12-06 13:20:33.255689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:44.775 [2024-12-06 13:20:33.255701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:44.775 [2024-12-06 13:20:33.255713] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:44.775 [2024-12-06 13:20:33.255724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:46.642 3258.00 IOPS, 12.73 MiB/s [2024-12-06T13:20:35.282Z] 2792.57 IOPS, 10.91 MiB/s [2024-12-06T13:20:35.282Z] [2024-12-06 13:20:35.255902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:46.642 [2024-12-06 13:20:35.255954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:46.642 [2024-12-06 13:20:35.255984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:46.642 [2024-12-06 13:20:35.255994] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:26:46.642 [2024-12-06 13:20:35.256005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
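The retries above land two seconds apart (13:20:29, :31, :33), each connect() to 10.0.0.3:4420 failing with errno 111 (connection refused), until the controller is left in failed state at 13:20:35; that reconnect cadence is what this timeout test measures. A minimal sketch of attaching a controller with such a bounded retry policy via SPDK's RPC client; the option names follow scripts/rpc.py bdev_nvme_attach_controller, the address and NQN match the log, and the timeout values are illustrative rather than the test's own:

    # Reconnect every 2 s, give up on the controller after ~8 s of loss.
    scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 8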
00:26:47.832 2443.50 IOPS, 9.54 MiB/s 00:26:47.832 Latency(us) 00:26:47.832 [2024-12-06T13:20:36.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.832 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:26:47.832 NVMe0n1 : 8.12 2408.18 9.41 15.77 0.00 52870.38 2472.49 7046430.72 00:26:47.832 [2024-12-06T13:20:36.472Z] =================================================================================================================== 00:26:47.832 [2024-12-06T13:20:36.472Z] Total : 2408.18 9.41 15.77 0.00 52870.38 2472.49 7046430.72 00:26:47.832 { 00:26:47.832 "results": [ 00:26:47.832 { 00:26:47.832 "job": "NVMe0n1", 00:26:47.832 "core_mask": "0x4", 00:26:47.832 "workload": "randread", 00:26:47.832 "status": "finished", 00:26:47.832 "queue_depth": 128, 00:26:47.832 "io_size": 4096, 00:26:47.832 "runtime": 8.117319, 00:26:47.832 "iops": 2408.184278577693, 00:26:47.832 "mibps": 9.406969838194113, 00:26:47.832 "io_failed": 128, 00:26:47.832 "io_timeout": 0, 00:26:47.832 "avg_latency_us": 52870.38171394777, 00:26:47.832 "min_latency_us": 2472.4945454545455, 00:26:47.832 "max_latency_us": 7046430.72 00:26:47.832 } 00:26:47.832 ], 00:26:47.832 "core_count": 1 00:26:47.832 } 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:47.832 Attaching 5 probes... 00:26:47.832 1323.853970: reset bdev controller NVMe0 00:26:47.832 1323.911515: reconnect bdev controller NVMe0 00:26:47.832 3324.283173: reconnect delay bdev controller NVMe0 00:26:47.832 3324.323415: reconnect bdev controller NVMe0 00:26:47.832 5324.771463: reconnect delay bdev controller NVMe0 00:26:47.832 5324.793584: reconnect bdev controller NVMe0 00:26:47.832 7325.215744: reconnect delay bdev controller NVMe0 00:26:47.832 7325.251828: reconnect bdev controller NVMe0 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 98263 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 98249 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98249 ']' 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98249 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.832 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98249 00:26:47.832 killing process with pid 98249 00:26:47.832 Received shutdown signal, test time was about 8.185990 seconds 00:26:47.832 00:26:47.832 Latency(us) 00:26:47.832 [2024-12-06T13:20:36.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.832 [2024-12-06T13:20:36.473Z] =================================================================================================================== 00:26:47.833 [2024-12-06T13:20:36.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:47.833 13:20:36 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:47.833 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:47.833 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98249' 00:26:47.833 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98249 00:26:47.833 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98249 00:26:48.111 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:48.375 rmmod nvme_tcp 00:26:48.375 rmmod nvme_fabrics 00:26:48.375 rmmod nvme_keyring 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 97679 ']' 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 97679 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97679 ']' 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97679 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97679 00:26:48.375 killing process with pid 97679 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97679' 00:26:48.375 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97679 00:26:48.376 13:20:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97679 00:26:48.634 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:48.634 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:48.634 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:48.634 13:20:37 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:26:48.634 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:26:48.634 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:26:48.634 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:48.634 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:48.634 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:48.635 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:48.635 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:48.635 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:26:48.892 00:26:48.892 real 0m47.071s 00:26:48.892 user 2m17.921s 00:26:48.892 sys 0m4.848s 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:48.892 ************************************ 00:26:48.892 END TEST nvmf_timeout 00:26:48.892 ************************************ 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:48.892 00:26:48.892 real 5m52.942s 00:26:48.892 user 15m14.941s 00:26:48.892 sys 1m4.484s 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:48.892 13:20:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
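The xtrace above is nvmftestfini unwinding the environment: unload the nvme-tcp/fabrics/keyring modules, kill the target (pid 97679), restore iptables, then delete the veth/bridge topology and the target namespace. A sketch of what those steps amount to, assuming the SPDK-added firewall rules carry an "SPDK_NVMF" marker as the grep implies:

    # Reload the ruleset minus any SPDK_NVMF-tagged entries.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Then delete the test links and the namespace, mirroring the commands above.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk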
00:26:48.892 ************************************ 00:26:48.892 END TEST nvmf_host 00:26:48.892 ************************************ 00:26:49.150 13:20:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:49.150 13:20:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:49.150 13:20:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:49.150 13:20:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:49.150 13:20:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.150 13:20:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.150 ************************************ 00:26:49.150 START TEST nvmf_target_core_interrupt_mode 00:26:49.150 ************************************ 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:49.150 * Looking for test storage... 00:26:49.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:49.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.150 --rc genhtml_branch_coverage=1 00:26:49.150 --rc genhtml_function_coverage=1 00:26:49.150 --rc genhtml_legend=1 00:26:49.150 --rc geninfo_all_blocks=1 00:26:49.150 --rc geninfo_unexecuted_blocks=1 00:26:49.150 00:26:49.150 ' 00:26:49.150 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:49.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.150 --rc genhtml_branch_coverage=1 00:26:49.150 --rc genhtml_function_coverage=1 00:26:49.150 --rc genhtml_legend=1 00:26:49.150 --rc geninfo_all_blocks=1 00:26:49.151 --rc geninfo_unexecuted_blocks=1 00:26:49.151 00:26:49.151 ' 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:49.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.151 --rc genhtml_branch_coverage=1 00:26:49.151 --rc genhtml_function_coverage=1 00:26:49.151 --rc genhtml_legend=1 00:26:49.151 --rc geninfo_all_blocks=1 00:26:49.151 --rc geninfo_unexecuted_blocks=1 00:26:49.151 00:26:49.151 ' 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:49.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.151 --rc genhtml_branch_coverage=1 00:26:49.151 --rc genhtml_function_coverage=1 00:26:49.151 --rc genhtml_legend=1 00:26:49.151 --rc geninfo_all_blocks=1 00:26:49.151 --rc geninfo_unexecuted_blocks=1 00:26:49.151 00:26:49.151 ' 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.151 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:49.410 ************************************ 00:26:49.410 START TEST nvmf_abort 00:26:49.410 ************************************ 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:49.410 * Looking for test storage... 00:26:49.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:49.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.410 --rc genhtml_branch_coverage=1 00:26:49.410 --rc genhtml_function_coverage=1 00:26:49.410 --rc genhtml_legend=1 00:26:49.410 --rc geninfo_all_blocks=1 00:26:49.410 --rc geninfo_unexecuted_blocks=1 00:26:49.410 00:26:49.410 ' 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:49.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.410 --rc genhtml_branch_coverage=1 00:26:49.410 --rc genhtml_function_coverage=1 00:26:49.410 --rc genhtml_legend=1 00:26:49.410 --rc geninfo_all_blocks=1 00:26:49.410 --rc geninfo_unexecuted_blocks=1 00:26:49.410 00:26:49.410 ' 00:26:49.410 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:49.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.411 --rc genhtml_branch_coverage=1 00:26:49.411 --rc genhtml_function_coverage=1 00:26:49.411 --rc genhtml_legend=1 00:26:49.411 --rc geninfo_all_blocks=1 00:26:49.411 --rc geninfo_unexecuted_blocks=1 00:26:49.411 00:26:49.411 ' 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:49.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.411 --rc genhtml_branch_coverage=1 00:26:49.411 --rc genhtml_function_coverage=1 00:26:49.411 --rc genhtml_legend=1 00:26:49.411 --rc geninfo_all_blocks=1 00:26:49.411 --rc geninfo_unexecuted_blocks=1 00:26:49.411 00:26:49.411 ' 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.411 13:20:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.411 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:49.411 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:49.411 Cannot find device "nvmf_init_br" 00:26:49.412 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:26:49.412 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:49.412 Cannot find device "nvmf_init_br2" 00:26:49.412 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:26:49.412 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:49.412 Cannot find device "nvmf_tgt_br" 00:26:49.412 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:26:49.412 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:49.670 Cannot find device "nvmf_tgt_br2" 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:49.670 Cannot find device "nvmf_init_br" 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:49.670 Cannot find device "nvmf_init_br2" 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:49.670 Cannot find device "nvmf_tgt_br" 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:49.670 Cannot find device "nvmf_tgt_br2" 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:49.670 Cannot find device "nvmf_br" 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:49.670 Cannot find device "nvmf_init_if" 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:49.670 Cannot find device "nvmf_init_if2" 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:49.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:49.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:49.670 
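For reference, the network that nvmf_veth_init is assembling in the trace above boils down to the following sequence. Every command, interface name, and address is taken from the xtrace lines themselves; only the consolidation into one listing is ours, and the bridge enslavement plus iptables rules follow in the trace below:

    # Two initiator-side and two target-side veth pairs; target ends move into a namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Initiator addresses live in the root namespace, target addresses inside the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # All ends are then brought up and the *_br peers are enslaved to bridge nvmf_br,
    # which is what stitches the root namespace and the target namespace together.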
13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:49.670 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:49.928 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:49.928 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:49.928 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:49.928 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:49.928 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:49.928 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:49.929 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:49.929 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:26:49.929 00:26:49.929 --- 10.0.0.3 ping statistics --- 00:26:49.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.929 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:49.929 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:49.929 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:26:49.929 00:26:49.929 --- 10.0.0.4 ping statistics --- 00:26:49.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.929 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:49.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:26:49.929 00:26:49.929 --- 10.0.0.1 ping statistics --- 00:26:49.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.929 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:49.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:26:49.929 00:26:49.929 --- 10.0.0.2 ping statistics --- 00:26:49.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.929 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=98734 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 98734 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 98734 ']' 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.929 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:49.929 [2024-12-06 13:20:38.442604] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:49.929 [2024-12-06 13:20:38.443621] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:26:49.929 [2024-12-06 13:20:38.443678] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.187 [2024-12-06 13:20:38.588223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:50.187 [2024-12-06 13:20:38.655804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.187 [2024-12-06 13:20:38.655863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.187 [2024-12-06 13:20:38.655875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.187 [2024-12-06 13:20:38.655883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.187 [2024-12-06 13:20:38.655890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.187 [2024-12-06 13:20:38.657132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.187 [2024-12-06 13:20:38.657279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.187 [2024-12-06 13:20:38.657286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.187 [2024-12-06 13:20:38.757782] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:50.187 [2024-12-06 13:20:38.757872] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:50.187 [2024-12-06 13:20:38.758469] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:50.187 [2024-12-06 13:20:38.758558] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
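The command that just produced the DPDK initialization and reactor notices is worth isolating. It is reproduced verbatim from the trace; the flag glosses are our reading of the options, not log output:

    # Start the NVMe-oF target inside the test namespace, in interrupt mode
    #   -i 0              shared-memory instance ID (the NVMF_APP_SHM_ID seen earlier)
    #   -e 0xFFFF         tracepoint group mask ('spdk_trace -s nvmf -i 0' can read it, per the notices above)
    #   --interrupt-mode  reactors sleep on events instead of busy-polling
    #   -m 0xE            core mask 0b1110, i.e. cores 1-3, matching the three
    #                     "Reactor started on core" notices above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE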
00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.121 [2024-12-06 13:20:39.538300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.121 Malloc0 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.121 Delay0 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.121 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.121 13:20:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.122 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:51.122 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.122 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.122 [2024-12-06 13:20:39.618328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:51.122 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.122 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:51.122 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.122 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.122 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.122 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:51.380 [2024-12-06 13:20:39.793641] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:53.296 Initializing NVMe Controllers 00:26:53.296 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:26:53.296 controller IO queue size 128 less than required 00:26:53.296 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:53.296 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:53.296 Initialization complete. Launching workers. 
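Before the abort statistics that follow, the provisioning steps scattered through the rpc_cmd traces above collapse into a short sequence. This is a sketch assuming rpc_cmd forwards to scripts/rpc.py (the path the next test assigns to rpc_py); the arguments are verbatim from the trace, the comments are our annotations:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256            # TCP transport (-o/-u/-a values as recorded)
    rpc.py bdev_malloc_create 64 4096 -b Malloc0                     # 64 MB RAM disk, 4096-byte blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # delay bdev keeps I/O in flight long enough to abort
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow any host, serial SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0        # expose Delay0 as a namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # The workload generator then connects over TCP and floods the queue with aborts:
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128                                # one core, 1 s run, queue depth 128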
00:26:53.296 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27588 00:26:53.296 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27645, failed to submit 66 00:26:53.296 success 27588, unsuccessful 57, failed 0 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.296 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.296 rmmod nvme_tcp 00:26:53.296 rmmod nvme_fabrics 00:26:53.296 rmmod nvme_keyring 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 98734 ']' 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 98734 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 98734 ']' 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 98734 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98734 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:53.556 killing process with pid 98734 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98734' 00:26:53.556 
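As a sanity check on the counters just printed: the 27588 successful aborts plus 57 unsuccessful ones account for all 27645 abort commands submitted (27588 + 57 = 27645), another 66 could not be submitted at queue depth 128, and the 27588 I/Os reported as failed on the NS line plausibly correspond one-to-one to the commands the successful aborts cancelled.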
13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 98734 00:26:53.556 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 98734 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.818 13:20:42 
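Teardown mirrors setup. A condensed sketch of the nvmf_tcp_fini / nvmf_veth_fini commands traced above and below; the loop is our shorthand for the four individual nomaster/down pairs in the trace:

    # Remove only the firewall rules tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach every *_br peer from the bridge and bring it down
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster
        ip link set "$br" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # _remove_spdk_ns finally deletes the nvmf_tgt_ns_spdk namespace itself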
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:26:53.818 00:26:53.818 real 0m4.649s 00:26:53.818 user 0m9.096s 00:26:53.818 sys 0m1.587s 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.818 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.818 ************************************ 00:26:53.818 END TEST nvmf_abort 00:26:53.818 ************************************ 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:54.093 ************************************ 00:26:54.093 START TEST nvmf_ns_hotplug_stress 00:26:54.093 ************************************ 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:54.093 * Looking for test storage... 00:26:54.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:54.093 13:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:54.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.093 --rc genhtml_branch_coverage=1 00:26:54.093 --rc genhtml_function_coverage=1 00:26:54.093 --rc genhtml_legend=1 00:26:54.093 --rc geninfo_all_blocks=1 00:26:54.093 --rc geninfo_unexecuted_blocks=1 00:26:54.093 00:26:54.093 ' 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:54.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.093 --rc genhtml_branch_coverage=1 00:26:54.093 --rc genhtml_function_coverage=1 00:26:54.093 --rc genhtml_legend=1 00:26:54.093 --rc geninfo_all_blocks=1 00:26:54.093 --rc geninfo_unexecuted_blocks=1 00:26:54.093 00:26:54.093 
' 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:54.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.093 --rc genhtml_branch_coverage=1 00:26:54.093 --rc genhtml_function_coverage=1 00:26:54.093 --rc genhtml_legend=1 00:26:54.093 --rc geninfo_all_blocks=1 00:26:54.093 --rc geninfo_unexecuted_blocks=1 00:26:54.093 00:26:54.093 ' 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:54.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.093 --rc genhtml_branch_coverage=1 00:26:54.093 --rc genhtml_function_coverage=1 00:26:54.093 --rc genhtml_legend=1 00:26:54.093 --rc geninfo_all_blocks=1 00:26:54.093 --rc geninfo_unexecuted_blocks=1 00:26:54.093 00:26:54.093 ' 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.093 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.094 13:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.094 13:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:54.094 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:54.353 Cannot find device "nvmf_init_br" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:26:54.353 Cannot find device "nvmf_init_br2" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:54.353 Cannot find device "nvmf_tgt_br" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:54.353 Cannot find device "nvmf_tgt_br2" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:54.353 Cannot find device "nvmf_init_br" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:54.353 Cannot find device "nvmf_init_br2" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:54.353 Cannot find device "nvmf_tgt_br" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:54.353 Cannot find device "nvmf_tgt_br2" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:54.353 Cannot find device "nvmf_br" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:54.353 Cannot find device "nvmf_init_if" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:54.353 Cannot find device "nvmf_init_if2" 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:54.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:54.353 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:54.353 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:54.613 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:54.613 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:54.613 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:54.613 13:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:54.613 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:54.613 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:26:54.613 00:26:54.613 --- 10.0.0.3 ping statistics --- 00:26:54.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.613 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:54.613 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:54.613 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:26:54.613 00:26:54.613 --- 10.0.0.4 ping statistics --- 00:26:54.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.613 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:54.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:54.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:26:54.613 00:26:54.613 --- 10.0.0.1 ping statistics --- 00:26:54.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.613 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:54.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:26:54.613 00:26:54.613 --- 10.0.0.2 ping statistics --- 00:26:54.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.613 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=99047 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 99047 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 99047 ']' 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.613 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.613 13:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.614 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.614 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:54.873 [2024-12-06 13:20:43.252585] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:54.873 [2024-12-06 13:20:43.253906] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:26:54.873 [2024-12-06 13:20:43.253984] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.873 [2024-12-06 13:20:43.408176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:54.873 [2024-12-06 13:20:43.483900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.873 [2024-12-06 13:20:43.483960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.873 [2024-12-06 13:20:43.483971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.873 [2024-12-06 13:20:43.483980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.873 [2024-12-06 13:20:43.483987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.873 [2024-12-06 13:20:43.485272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:54.873 [2024-12-06 13:20:43.485353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.873 [2024-12-06 13:20:43.485359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.132 [2024-12-06 13:20:43.585417] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:55.132 [2024-12-06 13:20:43.585748] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:55.132 [2024-12-06 13:20:43.585848] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:55.132 [2024-12-06 13:20:43.586840] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
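For readers following along: the nvmf/common.sh steps traced above amount to the small topology script sketched below. Two veth pairs serve the initiator side (10.0.0.1, 10.0.0.2) and two serve the target side (10.0.0.3, 10.0.0.4, moved into the nvmf_tgt_ns_spdk namespace), with all bridge-side peers enslaved to nvmf_br. This is a simplified reconstruction from the logged commands, not the library code itself; error handling and the teardown pass at the top of the log are omitted.

    #!/usr/bin/env bash
    # Reconstructed sketch of the logged topology setup (assumes root privileges).
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-side interfaces live in the namespace where nvmf_tgt runs.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge ties both sides together; iptables rules then open TCP/4420.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

As the trace shows, the actual ipts wrapper additionally tags each iptables rule with an SPDK_NVMF comment so that teardown can later find and delete exactly the rules it added.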
00:26:55.701 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.701 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:26:55.701 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:55.701 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:55.701 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:55.960 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.960 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:26:55.960 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:56.219 [2024-12-06 13:20:44.681416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.219 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:56.479 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:56.737 [2024-12-06 13:20:45.326032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:56.737 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:57.302 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:57.560 Malloc0 00:26:57.560 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:57.817 Delay0 00:26:57.818 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:58.076 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:58.335 NULL1 00:26:58.335 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:26:58.594 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:58.594 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=99184 00:26:58.594 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:26:58.594 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:59.970 Read completed with error (sct=0, sc=11) 00:26:59.970 13:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.970 13:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:59.970 13:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:00.537 true 00:27:00.537 13:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:00.537 13:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:01.106 13:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:01.364 13:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:01.364 13:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:01.622 true 00:27:01.622 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:01.622 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:01.880 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.139 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:02.139 13:20:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:02.397 true 00:27:02.397 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:02.397 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:02.656 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.915 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:02.915 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:03.174 true 00:27:03.174 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:03.174 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:04.109 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:04.368 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:04.368 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:04.626 true 00:27:04.626 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:04.626 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:04.884 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:05.142 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:05.142 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:05.401 true 00:27:05.401 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:05.401 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:05.660 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:05.919 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:05.919 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:06.178 true 00:27:06.178 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:06.178 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.112 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:07.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:07.371 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:07.371 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:07.629 true 00:27:07.629 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:07.629 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.887 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.145 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:08.145 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:08.403 true 00:27:08.404 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:08.404 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.662 13:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.919 13:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:08.919 13:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:09.177 true 00:27:09.177 13:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:09.177 13:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:10.110 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:10.368 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:10.368 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:10.626 true 00:27:10.626 13:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:10.626 13:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:10.884 13:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.143 13:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:11.143 13:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:11.402 true 00:27:11.402 13:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:11.402 13:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.660 13:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.918 13:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:11.918 13:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:12.176 true 00:27:12.176 13:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:12.176 13:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:13.110 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:13.369 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:13.369 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:13.628 true 00:27:13.628 
13:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:13.628 13:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:13.886 13:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.453 13:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:14.453 13:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:14.711 true 00:27:14.711 13:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:14.711 13:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.970 13:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:15.229 13:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:15.229 13:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:15.487 true 00:27:15.487 13:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:15.487 13:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:16.055 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.314 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:16.314 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:16.573 true 00:27:16.830 13:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:16.830 13:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.089 13:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.089 13:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:17.089 
13:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:17.347 true 00:27:17.606 13:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:17.606 13:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.865 13:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.865 13:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:17.865 13:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:18.125 true 00:27:18.125 13:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:18.125 13:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.061 13:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:19.628 13:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:19.628 13:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:19.628 true 00:27:19.628 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:19.628 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.886 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.452 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:20.452 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:20.452 true 00:27:20.452 13:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:20.452 13:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.711 13:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.970 13:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:20.970 13:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:21.229 true 00:27:21.229 13:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:21.229 13:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.194 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:22.453 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:22.453 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:22.712 true 00:27:22.712 13:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:22.712 13:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.970 13:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.229 13:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:23.229 13:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:23.486 true 00:27:23.486 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:23.486 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.744 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:24.003 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:24.003 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:24.261 true 00:27:24.261 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:24.261 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.194 13:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:25.450 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:25.450 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:25.708 true 00:27:25.708 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:25.708 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.965 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:26.222 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:26.222 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:26.478 true 00:27:26.478 13:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:26.478 13:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:26.790 13:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:27.047 13:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:27.047 13:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:27.304 true 00:27:27.304 13:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:27.304 13:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:28.237 13:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:28.495 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:28.495 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:28.753 Initializing NVMe Controllers 00:27:28.753 Attached to 
NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.753 Controller IO queue size 128, less than required. 00:27:28.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:28.753 Controller IO queue size 128, less than required. 00:27:28.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:28.753 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:28.753 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:28.753 Initialization complete. Launching workers. 00:27:28.753 ======================================================== 00:27:28.753 Latency(us) 00:27:28.753 Device Information : IOPS MiB/s Average min max 00:27:28.753 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 344.48 0.17 149321.17 3253.10 1104098.71 00:27:28.753 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8285.08 4.05 15449.12 2701.24 556645.96 00:27:28.753 ======================================================== 00:27:28.753 Total : 8629.56 4.21 20793.15 2701.24 1104098.71 00:27:28.753 00:27:29.012 true 00:27:29.012 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99184 00:27:29.012 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (99184) - No such process 00:27:29.012 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 99184 00:27:29.012 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:29.270 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:29.529 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:29.529 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:29.529 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:29.530 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:29.530 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:29.788 null0 00:27:29.788 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:29.788 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:29.788 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:30.047 null1 00:27:30.047 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:30.047 13:21:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:30.047 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:30.306 null2 00:27:30.306 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:30.306 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:30.306 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:30.565 null3 00:27:30.565 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:30.565 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:30.565 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:30.824 null4 00:27:30.824 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:30.824 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:30.824 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:31.108 null5 00:27:31.108 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:31.108 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:31.108 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:31.366 null6 00:27:31.366 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:31.366 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:31.366 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:31.625 null7 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
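The eight bdev_null_create calls above come from a simple loop: each worker thread gets its own 100 MB, 4096-byte-block null bdev. A minimal sketch, where rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path seen in the trace:

    nthreads=8
    # One null bdev per worker: null0 .. null7, 100 MB each, 4096-byte blocks.
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096
    done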
00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
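The interleaved xtrace output here is eight copies of the add_remove helper running at once, which makes it hard to read linearly. Untangled, each worker is apparently just the following (reconstructed from the sh@14-18 trace lines; the exact function body is inferred from the trace order):

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace ten times.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }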
00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
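Each pids+=($!) above records one background worker. Condensed from the sh@62-66 trace lines, the spawn pattern is roughly this sketch, with worker i driving namespace ID i+1 against its own null bdev:

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # add_remove 1 null0 ... add_remove 8 null7
        pids+=($!)
    done
    wait "${pids[@]}"   # the trace below shows: wait 100191 100193 ... 100208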
00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
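Stepping back, the whole target-side bring-up earlier in this test (transport, subsystem, listeners, and the three backing bdevs) reduces to a short RPC sequence. The commands below are copied from the traced invocations, again with rpc.py standing in for the full script path:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # Backing devices: a malloc bdev wrapped in a delay bdev, plus a resizable null bdev.
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The 1,000,000-unit read/write delays configured on Delay0 are presumably what produce the ~1.1 s maximum latency and the low IOPS reported for NSID 1 in the perf summary above, while NULL1 (NSID 2) completes in microseconds.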
00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 100191 100193 100195 100196 100198 100202 100205 100208 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.625 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:31.883 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:31.883 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
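Because the eight workers share one console, their xtrace lines interleave from here on; within any single nsid the ordering is still add-then-remove. One way to follow a single namespace through a captured copy of this output (the log file name is a placeholder, and nsid 5 is just an example):

    # Filter one namespace's add/remove history out of the interleaved
    # trace; matches "-n 5" on add lines and a trailing " 5" on removes.
    grep -E 'nvmf_subsystem_(add|remove)_ns (-n 5 |nqn[^ ]* 5( |$))' console.log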
00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.141 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.399 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.400 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:32.400 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.400 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.400 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:32.657 13:21:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:32.657 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:32.658 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:32.658 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:32.658 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:32.658 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:32.658 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:32.658 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.916 
13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.916 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:33.174 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:33.174 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:33.174 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:33.174 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:33.174 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:33.174 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:27:33.174 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:33.432 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.432 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.432 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.432 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:33.432 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.432 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.432 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.432 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:33.690 13:21:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.690 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.690 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:33.690 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:33.690 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.690 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.690 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:33.690 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:33.690 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:33.691 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:33.691 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.949 
13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:34.208 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:34.467 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:34.467 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:34.467 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:34.467 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:34.725 13:21:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.725 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:34.984 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:34.984 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:34.984 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.984 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.984 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:34.984 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:34.984 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:34.984 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:34.984 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
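Every iteration above is an ordinary JSON-RPC call through scripts/rpc.py, so a single hotplug cycle can be reproduced by hand against the same subsystem; the paths and names below follow the log and assume the target from this test is still running:

    # One manual hotplug cycle matching the traced commands; assumes the
    # SPDK target is up and the null4 bdev exists.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4   # attach null4 as nsid 5
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5         # detach nsid 5 again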
00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.242 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:35.500 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.500 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.500 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:35.500 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.500 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.500 13:21:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:35.500 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:35.500 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:35.500 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.500 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.500 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:35.500 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:35.758 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:35.758 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:35.758 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:35.758 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.758 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.758 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:35.758 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.758 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.758 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:35.758 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:36.015 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.016 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.016 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:36.016 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:36.016 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.274 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:36.531 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.531 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.532 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:36.532 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.532 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.532 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:36.532 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:36.532 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:36.532 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.532 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.532 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:36.532 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.532 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.532 13:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:36.532 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.532 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.532 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:36.532 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:36.789 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.789 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.789 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:36.789 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.789 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.789 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:36.789 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:36.789 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:36.789 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:36.790 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
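Once each worker finishes its ten cycles (the bare ++i / i<10 pairs a little further below, with no RPC in between), the @68 trap is cleared and @70 nvmftestfini tears the target down. Condensed from the teardown output that follows, the cleanup amounts to roughly the following; run as root, and only meaningful on a host configured by this harness:

    # Condensed from the nvmftestfini / nvmf_veth_fini output below;
    # nvme_keyring is unloaded as a dependency of the first modprobe.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's SPDK_NVMF rules
    ip link delete nvmf_br type bridge                     # remove the veth bridge topology
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if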
00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:37.047 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:37.304 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.305 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:37.563 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.563 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.563 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:37.563 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:37.563 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:37.563 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.563 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.563 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.563 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.563 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.563 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:37.821 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.821 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.821 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.821 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:27:37.821 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.821 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.821 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.821 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:38.079 rmmod nvme_tcp 00:27:38.079 rmmod nvme_fabrics 00:27:38.079 rmmod nvme_keyring 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 99047 ']' 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 99047 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 99047 ']' 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 99047 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.079 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99047 00:27:38.080 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:38.080 killing process with pid 99047 00:27:38.080 13:21:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:38.080 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99047' 00:27:38.080 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 99047 00:27:38.080 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 99047 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:38.338 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:38.339 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:38.339 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:38.339 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:38.339 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:38.339 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:38.339 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:38.339 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:38.339 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:38.339 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:38.339 13:21:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:38.598 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:38.598 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.598 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.598 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:27:38.598 00:27:38.598 real 0m44.508s 00:27:38.598 user 3m16.710s 00:27:38.598 sys 0m18.287s 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:38.598 ************************************ 00:27:38.598 END TEST nvmf_ns_hotplug_stress 00:27:38.598 ************************************ 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:38.598 ************************************ 00:27:38.598 START TEST nvmf_delete_subsystem 00:27:38.598 ************************************ 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:38.598 * Looking for test storage... 
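Before the next test starts, note that the nvmftestfini teardown just traced condenses to roughly the following. A sketch under stated assumptions, not the helper verbatim: nvmfpid stands in for the target PID (99047 above), and the final netns deletion is assumed from what _remove_spdk_ns does, since its output is suppressed in the trace.

    # Condensed teardown, per the trace: unload initiator modules, stop the
    # target, restore iptables, and dismantle the veth/bridge topology.
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
    kill "$nvmfpid" && wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" nomaster
      ip link set "$l" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed: _remove_spdk_ns ends here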
00:27:38.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:38.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.598 --rc genhtml_branch_coverage=1 00:27:38.598 --rc genhtml_function_coverage=1 00:27:38.598 --rc genhtml_legend=1 00:27:38.598 --rc geninfo_all_blocks=1 00:27:38.598 --rc geninfo_unexecuted_blocks=1 00:27:38.598 00:27:38.598 ' 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:38.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.598 --rc genhtml_branch_coverage=1 00:27:38.598 --rc genhtml_function_coverage=1 00:27:38.598 --rc genhtml_legend=1 00:27:38.598 --rc geninfo_all_blocks=1 00:27:38.598 --rc geninfo_unexecuted_blocks=1 00:27:38.598 00:27:38.598 ' 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:38.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.598 --rc genhtml_branch_coverage=1 00:27:38.598 --rc genhtml_function_coverage=1 00:27:38.598 --rc genhtml_legend=1 00:27:38.598 --rc geninfo_all_blocks=1 00:27:38.598 --rc geninfo_unexecuted_blocks=1 00:27:38.598 00:27:38.598 ' 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:38.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.598 --rc genhtml_branch_coverage=1 00:27:38.598 --rc genhtml_function_coverage=1 00:27:38.598 --rc 
genhtml_legend=1 00:27:38.598 --rc geninfo_all_blocks=1 00:27:38.598 --rc geninfo_unexecuted_blocks=1 00:27:38.598 00:27:38.598 ' 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:38.598 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.858 13:21:27 
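The cmp_versions trace a few lines back (lcov 1.15 vs 2, deciding which coverage options to use) is a field-wise numeric comparison. A minimal standalone restatement, splitting on "." only (the traced helper also splits on "-" and ":"); version_lt is a hypothetical name, the actual helpers live in scripts/common.sh:

    # Compare dotted versions field by field; returns 0 iff $1 < $2.
    version_lt() {
      local -a a b
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2, use the newer lcov option set"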
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.858 13:21:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:38.858 Cannot find device "nvmf_init_br" 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:27:38.858 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:38.858 Cannot find device "nvmf_init_br2" 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:38.859 Cannot find device "nvmf_tgt_br" 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:38.859 Cannot find device "nvmf_tgt_br2" 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:38.859 Cannot find device "nvmf_init_br" 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:38.859 Cannot find device "nvmf_init_br2" 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:38.859 Cannot find device "nvmf_tgt_br" 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:38.859 Cannot find device "nvmf_tgt_br2" 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:38.859 Cannot find device "nvmf_br" 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:38.859 Cannot find device "nvmf_init_if" 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:38.859 Cannot find device "nvmf_init_if2" 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:38.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:38.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:38.859 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:39.116 13:21:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:39.116 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:39.117 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:39.117 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:27:39.117 00:27:39.117 --- 10.0.0.3 ping statistics --- 00:27:39.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.117 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:39.117 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:39.117 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:27:39.117 00:27:39.117 --- 10.0.0.4 ping statistics --- 00:27:39.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.117 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:39.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:27:39.117 00:27:39.117 --- 10.0.0.1 ping statistics --- 00:27:39.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.117 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:39.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:39.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:27:39.117 00:27:39.117 --- 10.0.0.2 ping statistics --- 00:27:39.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.117 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=101573 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 101573 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 101573 ']' 00:27:39.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
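The four successful pings above exercise the veth/bridge topology built earlier in the trace. Condensed to one initiator/target pair (the *2 interfaces are symmetric), the setup is roughly:

    # Condensed from the traced ip commands: one veth pair per endpoint,
    # target ends moved into the netns, host-side peers joined by a bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # ipts, mirroring the expansion shown in the trace: tag every rule.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Tagging each rule with an SPDK_NVMF comment is what lets teardown restore the original firewall with a plain iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of the previous test.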
00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.117 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:39.375 [2024-12-06 13:21:27.767615] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:39.375 [2024-12-06 13:21:27.769366] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:27:39.375 [2024-12-06 13:21:27.769444] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.375 [2024-12-06 13:21:27.923988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:39.375 [2024-12-06 13:21:27.997158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.375 [2024-12-06 13:21:27.997408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.375 [2024-12-06 13:21:27.997625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.375 [2024-12-06 13:21:27.997808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.375 [2024-12-06 13:21:27.997977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.375 [2024-12-06 13:21:27.999390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.375 [2024-12-06 13:21:27.999408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.634 [2024-12-06 13:21:28.100740] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:39.634 [2024-12-06 13:21:28.100930] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:39.634 [2024-12-06 13:21:28.101325] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
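nvmfappstart above boils down to launching the target inside the namespace and waiting for its RPC socket; -m 0x3 pins the two reactors to cores 0 and 1, matching the notices just printed. A simplified stand-in sketch (waitforlisten is the autotest helper; the poll loop below is an assumed equivalent using the rpc.py spdk_get_version method against the default /var/tmp/spdk.sock):

    # Launch the target in interrupt mode inside the test netns, then poll
    # until its RPC socket answers (simplified stand-in for waitforlisten).
    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk \
      "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    until "$spdk/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.2
    done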
00:27:40.200 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.200 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:40.200 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.200 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:40.200 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.200 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.200 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.200 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.200 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.200 [2024-12-06 13:21:28.832625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.459 [2024-12-06 13:21:28.853015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.459 NULL1 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.459 13:21:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.459 Delay0 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=101630 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:40.459 13:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:40.459 [2024-12-06 13:21:29.062491] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
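All the pieces of the test are now in place. The RPC sequence just traced, plus the perf kickoff, condenses to the following; every command appears verbatim above, only the rpc/nqn convenience variables are added. The delay bdev injects 1 s of artificial latency on every op, guaranteeing plenty of I/O is still in flight when the subsystem is deleted.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_create_transport -t tcp -o -u 8192     # -u: in-capsule data size
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512             # 1000 MB, 512 B blocks
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000      # latencies in microseconds
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0
    # Drive 70/30 random read/write I/O at the listener for 5 s in the background...
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    # ...then delete the subsystem out from under it.
    "$rpc" nvmf_delete_subsystem "$nqn"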
00:27:42.441 13:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:42.441 13:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.441 13:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Write completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 starting I/O failed: -6 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 Read completed with error (sct=0, sc=8) 00:27:42.701 [2024-12-06 13:21:31.099946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x23797e0 is same with the state(6) to be set
00:27:42.701 [... a long run of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" records, interleaved with repeated "starting I/O failed: -6" markers, condensed; the distinct *ERROR* records interleaved with them follow, in order ...]
00:27:42.701 [2024-12-06 13:21:31.100653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4878000c60 is same with the state(6) to be set
00:27:43.634 [2024-12-06 13:21:32.078041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236daa0 is same with the state(6) to be set
00:27:43.635 [2024-12-06 13:21:32.099841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237bea0 is same with the state(6) to be set
00:27:43.635 [2024-12-06 13:21:32.100105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f487800d040 is same with the state(6) to be set
00:27:43.635 [2024-12-06 13:21:32.100291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f487800d820 is same with the state(6) to be set
00:27:43.635 [2024-12-06 13:21:32.101003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378a50 is same with the state(6) to be set
00:27:43.635 Initializing NVMe Controllers
00:27:43.635 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:27:43.635 Controller IO queue size 128, less than required.
00:27:43.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:43.635 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:27:43.635 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:27:43.635 Initialization complete. Launching workers.
00:27:43.635 ========================================================
00:27:43.635 Latency(us)
00:27:43.635 Device Information : IOPS MiB/s Average min max
00:27:43.635 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.57 0.09 882583.78 1663.53 1013380.64
00:27:43.635 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.24 0.08 984504.68 499.83 2002791.07
00:27:43.635 ========================================================
00:27:43.635 Total : 331.81 0.16 930268.75 499.83 2002791.07
00:27:43.635
00:27:43.635 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.635 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:27:43.635 [2024-12-06 13:21:32.102269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236daa0 (9): Bad file descriptor
00:27:43.635 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:27:43.635 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101630
13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101630
00:27:44.199 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (101630) - No such process
00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 101630
00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait
101630 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 101630 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:44.199 [2024-12-06 13:21:32.625075] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=101672 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:44.199 13:21:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101672 00:27:44.199 13:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:44.199 [2024-12-06 13:21:32.806384] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:44.766 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:44.766 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101672 00:27:44.766 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:45.023 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:45.023 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101672 00:27:45.023 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:45.588 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:45.588 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101672 00:27:45.588 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:46.154 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:46.154 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101672 00:27:46.154 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:46.719 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:46.719 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101672 00:27:46.719 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:47.284 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:47.284 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101672 00:27:47.284 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:47.284 Initializing NVMe Controllers 00:27:47.284 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:27:47.284 Controller IO queue size 128, less than required. 
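The trace above shows how delete_subsystem.sh supervises the I/O generator: spdk_nvme_perf is launched in the background against 10.0.0.3:4420 (-c 0xC pins it to lcores 2 and 3, -q 128 sets the queue depth, -o 512 uses 512-byte I/Os, -w randrw -M 70 asks for a 70/30 read/write mix, -t 3 runs for three seconds), and the script then polls the pid with kill -0 and half-second sleeps (delete_subsystem.sh@57-60). A minimal bash sketch of that polling idiom, with the pid and retry bound taken from the trace; the real helper differs in detail:

    # kill -0 sends no signal; it only asks the kernel whether the pid exists
    perf_pid=101672
    delay=0
    while kill -0 "$perf_pid"; do
        (( delay++ > 20 )) && { echo 'perf still running after ~10s' >&2; break; }
        sleep 0.5
    done

When the pid vanishes mid-check, the bash kill builtin prints the "No such process" diagnostic seen below, the loop ends, and the script reaps the exit status with wait.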
00:27:47.284 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:47.284 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:47.284 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:47.284 Initialization complete. Launching workers. 00:27:47.284 ======================================================== 00:27:47.284 Latency(us) 00:27:47.284 Device Information : IOPS MiB/s Average min max 00:27:47.284 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003619.20 1000153.33 1011453.99 00:27:47.284 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005689.08 1000259.70 1012592.80 00:27:47.284 ======================================================== 00:27:47.284 Total : 256.00 0.12 1004654.14 1000153.33 1012592.80 00:27:47.284 00:27:47.543 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:47.543 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101672 00:27:47.543 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (101672) - No such process 00:27:47.543 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 101672 00:27:47.543 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:47.543 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:47.543 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:47.543 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:47.801 rmmod nvme_tcp 00:27:47.801 rmmod nvme_fabrics 00:27:47.801 rmmod nvme_keyring 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 101573 ']' 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 101573 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 101573 ']' 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 101573 00:27:47.801 
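One detail worth noting in the two Latency(us) tables above: the Total row's Average is the IOPS-weighted mean of the per-core averages. For the run just above the weights are equal (128.00 IOPS on each of lcores 2 and 3), so it reduces to a plain mean; a one-liner to confirm it from the printed rows:

    awk 'BEGIN {
        i2 = 128.00; a2 = 1003619.20   # core 2 row: IOPS, Average(us)
        i3 = 128.00; a3 = 1005689.08   # core 3 row: IOPS, Average(us)
        printf "%.2f\n", (i2*a2 + i3*a3) / (i2 + i3)   # -> 1004654.14
    }'

The first run's Total (930268.75) reproduces the same way from its unequal weights (176.57 and 155.24), up to the rounding already present in the displayed IOPS.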
13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101573 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:47.801 killing process with pid 101573 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101573' 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 101573 00:27:47.801 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 101573 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:48.060 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:48.060 13:21:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:27:48.319 00:27:48.319 real 0m9.785s 00:27:48.319 user 0m24.407s 00:27:48.319 sys 0m2.470s 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:48.319 ************************************ 00:27:48.319 END TEST nvmf_delete_subsystem 00:27:48.319 ************************************ 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:48.319 ************************************ 00:27:48.319 START TEST nvmf_host_management 00:27:48.319 ************************************ 00:27:48.319 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:48.579 * Looking for test storage... 
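The real/user/sys triple and the START/END banners above come from the run_test wrapper in autotest_common.sh, which names a test, times the underlying script with the bash time keyword, and brackets its output. A rough sketch of its shape as suggested by the trace; the actual helper also manages xtrace state and failure bookkeeping:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"      # e.g. host_management.sh --transport=tcp --interrupt-mode
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

Here it has just closed nvmf_delete_subsystem (9.785s wall clock) and opened nvmf_host_management with --transport=tcp --interrupt-mode.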
00:27:48.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:48.579 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:48.579 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:48.579 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:48.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.579 --rc genhtml_branch_coverage=1 00:27:48.579 --rc genhtml_function_coverage=1 00:27:48.579 --rc genhtml_legend=1 00:27:48.579 --rc geninfo_all_blocks=1 00:27:48.579 --rc geninfo_unexecuted_blocks=1 00:27:48.579 00:27:48.579 ' 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:48.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.579 --rc genhtml_branch_coverage=1 00:27:48.579 --rc genhtml_function_coverage=1 00:27:48.579 --rc genhtml_legend=1 00:27:48.579 --rc geninfo_all_blocks=1 00:27:48.579 --rc geninfo_unexecuted_blocks=1 00:27:48.579 00:27:48.579 ' 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:48.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.579 --rc genhtml_branch_coverage=1 00:27:48.579 --rc genhtml_function_coverage=1 00:27:48.579 --rc genhtml_legend=1 00:27:48.579 --rc geninfo_all_blocks=1 00:27:48.579 --rc geninfo_unexecuted_blocks=1 00:27:48.579 00:27:48.579 ' 00:27:48.579 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:48.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.579 --rc genhtml_branch_coverage=1 00:27:48.579 --rc genhtml_function_coverage=1 00:27:48.579 --rc genhtml_legend=1 
00:27:48.579 --rc geninfo_all_blocks=1 00:27:48.579 --rc geninfo_unexecuted_blocks=1 00:27:48.579 00:27:48.579 ' 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain prefixes repeated several more times by nested sourcing, condensed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... previous PATH, condensed ...]
00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... previous PATH, condensed ...]
00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... full PATH echoed, condensed ...]
00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:48.580 13:21:37
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:48.580 13:21:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:48.580 Cannot find device "nvmf_init_br" 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:48.580 Cannot find device "nvmf_init_br2" 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:48.580 Cannot find device "nvmf_tgt_br" 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:48.580 Cannot find device "nvmf_tgt_br2" 00:27:48.580 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:27:48.581 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:48.581 Cannot find device "nvmf_init_br" 00:27:48.581 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:27:48.581 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:27:48.581 Cannot find device "nvmf_init_br2" 00:27:48.581 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:27:48.581 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:48.581 Cannot find device "nvmf_tgt_br" 00:27:48.581 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:27:48.581 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:48.581 Cannot find device "nvmf_tgt_br2" 00:27:48.581 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:27:48.581 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:48.840 Cannot find device "nvmf_br" 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:48.840 Cannot find device "nvmf_init_if" 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:48.840 Cannot find device "nvmf_init_if2" 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:48.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:48.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:48.840 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:48.841 13:21:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:48.841 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:48.841 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:27:48.841 00:27:48.841 --- 10.0.0.3 ping statistics --- 00:27:48.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.841 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:48.841 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:48.841 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:27:48.841 00:27:48.841 --- 10.0.0.4 ping statistics --- 00:27:48.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.841 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:48.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:48.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:27:48.841 00:27:48.841 --- 10.0.0.1 ping statistics --- 00:27:48.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.841 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:48.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:48.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:27:48.841 00:27:48.841 --- 10.0.0.2 ping statistics --- 00:27:48.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.841 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:48.841 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=101966 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 101966 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 101966 ']' 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.099 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.100 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
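nvmfappstart has just launched the target inside the namespace (the @508 ip netns exec ... nvmf_tgt line) and recorded nvmfpid=101966; waitforlisten now polls until that pid is alive and answering RPCs on /var/tmp/spdk.sock. A rough functional equivalent of the helper, assuming the SPDK repo layout (the real version in common/autotest_common.sh also honors custom RPC addresses; rpc_get_methods is a standard SPDK RPC):

# poll up to 100 times (~10 s) for the freshly spawned target to serve RPCs
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done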
00:27:49.100 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.100 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.100 [2024-12-06 13:21:37.544909] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:49.100 [2024-12-06 13:21:37.546180] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:27:49.100 [2024-12-06 13:21:37.546256] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.100 [2024-12-06 13:21:37.697581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.358 [2024-12-06 13:21:37.755054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.358 [2024-12-06 13:21:37.755317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.358 [2024-12-06 13:21:37.755448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.358 [2024-12-06 13:21:37.755659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.358 [2024-12-06 13:21:37.755695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.358 [2024-12-06 13:21:37.756913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.358 [2024-12-06 13:21:37.757050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.358 [2024-12-06 13:21:37.757476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.358 [2024-12-06 13:21:37.757484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:49.358 [2024-12-06 13:21:37.849115] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:49.358 [2024-12-06 13:21:37.849802] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:49.358 [2024-12-06 13:21:37.850045] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:49.358 [2024-12-06 13:21:37.850171] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:49.358 [2024-12-06 13:21:37.850393] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
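The startup notices above decode as follows: -m 0x1E is binary 11110, so four reactors come up on cores 1-4 ("Total cores available: 4"), leaving core 0 free for the bdevperf initiator launched later with -c 0x1; --interrupt-mode then switches the app thread and each nvmf_tgt_poll_group thread to interrupt-driven operation instead of busy polling, which is what the thread.c "Set spdk_thread ... to intr mode" lines record. A quick way to expand such core masks:

# expand an SPDK/DPDK core mask into core ids: 0x1E -> 1 2 3 4
mask=0x1E
for ((core = 0; core < 64; core++)); do
    (( (mask >> core) & 1 )) && printf '%d ' "$core"
done
echo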
00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.358 [2024-12-06 13:21:37.938485] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.358 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.617 Malloc0 00:27:49.617 [2024-12-06 13:21:38.026368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=102029 00:27:49.617 13:21:38 
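Steps @22-@30 write the target-side setup into rpcs.txt and replay it in one rpc_cmd batch; the Malloc0 line and the "NVMe/TCP Target Listening on 10.0.0.3 port 4420" notice above are its visible effects. Judging from those notices and from the host add/remove RPCs used later in the test, the batch is approximately the following (the malloc sizing and subsystem flags are assumptions, not verbatim from host_management.sh):

# plausible contents of the rpcs.txt batch for this test
bdev_malloc_create 64 512 -b Malloc0                 # a small RAM-backed bdev
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

(The tcp transport itself was already created separately at @18 with nvmf_create_transport -t tcp -o -u 8192.)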
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 102029 /var/tmp/bdevperf.sock 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 102029 ']' 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:49.617 { 00:27:49.617 "params": { 00:27:49.617 "name": "Nvme$subsystem", 00:27:49.617 "trtype": "$TEST_TRANSPORT", 00:27:49.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.617 "adrfam": "ipv4", 00:27:49.617 "trsvcid": "$NVMF_PORT", 00:27:49.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.617 "hdgst": ${hdgst:-false}, 00:27:49.617 "ddgst": ${ddgst:-false} 00:27:49.617 }, 00:27:49.617 "method": "bdev_nvme_attach_controller" 00:27:49.617 } 00:27:49.617 EOF 00:27:49.617 )") 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
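gen_nvmf_target_json 0 instantiates the heredoc template above once with subsystem=0, then jq validates and pretty-prints the result, yielding the concrete Nvme0 attach entry printed next; bdevperf consumes it through --json /dev/fd/63, a process substitution of exactly this output. Reduced to a self-contained snippet with this run's values (the surrounding config-array plumbing is elided):

# reproduce the template expansion for subsystem 0
subsystem=0; TEST_TRANSPORT=tcp; NVMF_FIRST_TARGET_IP=10.0.0.3; NVMF_PORT=4420
jq . << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF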
00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:49.617 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:49.617 "params": { 00:27:49.617 "name": "Nvme0", 00:27:49.617 "trtype": "tcp", 00:27:49.617 "traddr": "10.0.0.3", 00:27:49.617 "adrfam": "ipv4", 00:27:49.617 "trsvcid": "4420", 00:27:49.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.617 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:49.617 "hdgst": false, 00:27:49.617 "ddgst": false 00:27:49.617 }, 00:27:49.617 "method": "bdev_nvme_attach_controller" 00:27:49.617 }' 00:27:49.617 [2024-12-06 13:21:38.135784] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:27:49.617 [2024-12-06 13:21:38.135873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102029 ] 00:27:49.876 [2024-12-06 13:21:38.290024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.876 [2024-12-06 13:21:38.374662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.135 Running I/O for 10 seconds... 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.703 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.703 [2024-12-06 13:21:39.282975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.703 [2024-12-06 13:21:39.283061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.703 [2024-12-06 13:21:39.283102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.703 [2024-12-06 13:21:39.283114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.703 [2024-12-06 13:21:39.283126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.703 [2024-12-06 13:21:39.283153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.703 [2024-12-06 13:21:39.283181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.703 [2024-12-06 13:21:39.283191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.703 [2024-12-06 13:21:39.283203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.703 [2024-12-06 13:21:39.283215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.703 [2024-12-06 
13:21:39.283227] [... 59 further near-identical nvme_qpair.c record pairs condensed, covering the rest of the 64 in-flight I/Os from bdevperf's -q 64: WRITE sqid:1 cid:53-63 (lba 129664-130944) and READ sqid:1 cid:0-47 (lba 122880-128896), each printed by nvme_io_qpair_print_command and each completing ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:27:50.705 [2024-12-06 13:21:39.284879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.705 [2024-12-06 13:21:39.284920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.705 [2024-12-06 13:21:39.284934] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.705 [2024-12-06 13:21:39.284945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.705 [2024-12-06 13:21:39.284955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.705 [2024-12-06 13:21:39.284966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.705 [2024-12-06 13:21:39.284988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.705 [2024-12-06 13:21:39.284999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.705 [2024-12-06 13:21:39.285009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196d130 is same with the state(6) to be set 00:27:50.705 [2024-12-06 13:21:39.286214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:50.705 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.705 task offset: 129024 on job bdev=Nvme0n1 fails 00:27:50.705 00:27:50.705 Latency(us) 00:27:50.705 [2024-12-06T13:21:39.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.705 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.705 Job: Nvme0n1 ended in about 0.70 seconds with error 00:27:50.705 Verification LBA range: start 0x0 length 0x400 00:27:50.705 Nvme0n1 : 0.70 1379.00 86.19 91.93 0.00 42438.15 3157.64 39321.60 00:27:50.705 [2024-12-06T13:21:39.345Z] =================================================================================================================== 00:27:50.705 [2024-12-06T13:21:39.345Z] Total : 1379.00 86.19 91.93 0.00 42438.15 3157.64 39321.60 00:27:50.705 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:50.705 [2024-12-06 13:21:39.288397] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:50.705 [2024-12-06 13:21:39.288423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196d130 (9): Bad file descriptor 00:27:50.705 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.705 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.705 [2024-12-06 13:21:39.289520] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:27:50.705 [2024-12-06 13:21:39.289612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:50.705 [2024-12-06 13:21:39.289636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.705 [2024-12-06 13:21:39.289654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command 
failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:27:50.705 [2024-12-06 13:21:39.289666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:27:50.705 [2024-12-06 13:21:39.289676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.705 [2024-12-06 13:21:39.289686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x196d130 00:27:50.705 [2024-12-06 13:21:39.289726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196d130 (9): Bad file descriptor 00:27:50.705 [2024-12-06 13:21:39.289745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:50.705 [2024-12-06 13:21:39.289755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:50.705 [2024-12-06 13:21:39.289768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:50.705 [2024-12-06 13:21:39.289781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:50.705 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.705 13:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 102029 00:27:52.083 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (102029) - No such process 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:52.083 { 00:27:52.083 "params": { 00:27:52.083 "name": "Nvme$subsystem", 00:27:52.083 "trtype": "$TEST_TRANSPORT", 00:27:52.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.083 "adrfam": "ipv4", 00:27:52.083 "trsvcid": "$NVMF_PORT", 00:27:52.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.083 "hdgst": ${hdgst:-false}, 00:27:52.083 "ddgst": ${ddgst:-false} 00:27:52.083 }, 
00:27:52.083 "method": "bdev_nvme_attach_controller" 00:27:52.083 } 00:27:52.083 EOF 00:27:52.083 )") 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:52.083 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:52.083 "params": { 00:27:52.083 "name": "Nvme0", 00:27:52.083 "trtype": "tcp", 00:27:52.083 "traddr": "10.0.0.3", 00:27:52.083 "adrfam": "ipv4", 00:27:52.083 "trsvcid": "4420", 00:27:52.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:52.083 "hdgst": false, 00:27:52.083 "ddgst": false 00:27:52.083 }, 00:27:52.083 "method": "bdev_nvme_attach_controller" 00:27:52.083 }' 00:27:52.083 [2024-12-06 13:21:40.374167] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:27:52.083 [2024-12-06 13:21:40.374276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102075 ] 00:27:52.083 [2024-12-06 13:21:40.527559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.083 [2024-12-06 13:21:40.606766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.341 Running I/O for 1 seconds... 00:27:53.276 1472.00 IOPS, 92.00 MiB/s 00:27:53.276 Latency(us) 00:27:53.276 [2024-12-06T13:21:41.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.276 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.276 Verification LBA range: start 0x0 length 0x400 00:27:53.276 Nvme0n1 : 1.03 1493.99 93.37 0.00 0.00 41972.61 6672.76 37891.72 00:27:53.276 [2024-12-06T13:21:41.916Z] =================================================================================================================== 00:27:53.276 [2024-12-06T13:21:41.916Z] Total : 1493.99 93.37 0.00 0.00 41972.61 6672.76 37891.72 00:27:53.534 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:53.535 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:53.535 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:27:53.535 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:27:53.535 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:53.535 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:53.535 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:53.793 13:21:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:53.793 rmmod nvme_tcp 00:27:53.793 rmmod nvme_fabrics 00:27:53.793 rmmod nvme_keyring 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 101966 ']' 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 101966 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 101966 ']' 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 101966 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101966 00:27:53.793 killing process with pid 101966 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101966' 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 101966 00:27:53.793 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 101966 00:27:54.051 [2024-12-06 13:21:42.463986] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:54.051 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:54.051 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:54.051 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:54.051 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # 
iptables-restore 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:54.052 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:54.310 ************************************ 00:27:54.310 END TEST nvmf_host_management 00:27:54.310 ************************************ 00:27:54.310 00:27:54.310 real 0m5.837s 00:27:54.310 user 0m19.305s 00:27:54.310 sys 0m2.818s 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- 
# set +x 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:54.310 ************************************ 00:27:54.310 START TEST nvmf_lvol 00:27:54.310 ************************************ 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:54.310 * Looking for test storage... 00:27:54.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:54.310 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:54.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.569 --rc genhtml_branch_coverage=1 00:27:54.569 --rc genhtml_function_coverage=1 00:27:54.569 --rc genhtml_legend=1 00:27:54.569 --rc geninfo_all_blocks=1 00:27:54.569 --rc geninfo_unexecuted_blocks=1 00:27:54.569 00:27:54.569 ' 00:27:54.569 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:54.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.569 --rc genhtml_branch_coverage=1 00:27:54.570 --rc genhtml_function_coverage=1 00:27:54.570 --rc genhtml_legend=1 00:27:54.570 --rc geninfo_all_blocks=1 00:27:54.570 --rc geninfo_unexecuted_blocks=1 00:27:54.570 00:27:54.570 ' 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:54.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.570 --rc genhtml_branch_coverage=1 00:27:54.570 --rc genhtml_function_coverage=1 00:27:54.570 --rc genhtml_legend=1 00:27:54.570 --rc geninfo_all_blocks=1 00:27:54.570 --rc geninfo_unexecuted_blocks=1 00:27:54.570 00:27:54.570 ' 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:54.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.570 --rc genhtml_branch_coverage=1 00:27:54.570 --rc genhtml_function_coverage=1 00:27:54.570 --rc genhtml_legend=1 00:27:54.570 --rc geninfo_all_blocks=1 00:27:54.570 --rc geninfo_unexecuted_blocks=1 00:27:54.570 00:27:54.570 ' 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.570 13:21:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:54.570 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:54.571 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:54.571 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:54.571 13:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:54.571 Cannot find device "nvmf_init_br" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:54.571 Cannot find device "nvmf_init_br2" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:54.571 Cannot find device "nvmf_tgt_br" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:54.571 Cannot find device "nvmf_tgt_br2" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:54.571 Cannot find device "nvmf_init_br" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:54.571 Cannot find device "nvmf_init_br2" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:54.571 Cannot find 
device "nvmf_tgt_br" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:54.571 Cannot find device "nvmf_tgt_br2" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:54.571 Cannot find device "nvmf_br" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:54.571 Cannot find device "nvmf_init_if" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:54.571 Cannot find device "nvmf_init_if2" 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:54.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:54.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:54.571 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:54.829 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:54.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:54.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms
00:27:54.830
00:27:54.830 --- 10.0.0.3 ping statistics ---
00:27:54.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:54.830 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:27:54.830 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:27:54.830 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.097 ms
00:27:54.830
00:27:54.830 --- 10.0.0.4 ping statistics ---
00:27:54.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:54.830 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:27:54.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:54.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
00:27:54.830
00:27:54.830 --- 10.0.0.1 ping statistics ---
00:27:54.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:54.830 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:27:54.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:54.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms
00:27:54.830
00:27:54.830 --- 10.0.0.2 ping statistics ---
00:27:54.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:54.830 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=102345
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 102345
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 102345 ']'
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:54.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:54.830 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:27:55.089 [2024-12-06 13:21:43.468046] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:27:55.089 [2024-12-06 13:21:43.469406] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization...
00:27:55.089 [2024-12-06 13:21:43.469520] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:55.089 [2024-12-06 13:21:43.624814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:55.089 [2024-12-06 13:21:43.704476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:55.089 [2024-12-06 13:21:43.704558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:55.089 [2024-12-06 13:21:43.704572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:55.089 [2024-12-06 13:21:43.704583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:55.089 [2024-12-06 13:21:43.704592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:55.089 [2024-12-06 13:21:43.706128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:55.089 [2024-12-06 13:21:43.706314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:55.089 [2024-12-06 13:21:43.706326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:55.347 [2024-12-06 13:21:43.839833] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:27:55.347 [2024-12-06 13:21:43.840179] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:27:55.347 [2024-12-06 13:21:43.840389] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:27:55.347 [2024-12-06 13:21:43.840571] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
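Aside on the waitforlisten step traced above: it gates the rest of the test on the target actually being up, polling the app's RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100 in the trace) until an RPC call goes through. A rough standalone equivalent in bash, as a sketch only, not the actual autotest_common.sh helper (rpc_get_methods is just a cheap RPC that succeeds once the app is listening):

    # Sketch: poll the SPDK RPC socket until the target answers or we give up.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app died before listening
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1                                      # never came up in time
    }
    # usage: wait_for_rpc "$nvmfpid" /var/tmp/spdk.sock 100

The return-0 trace that follows is the real helper reporting success once 102345 is listening.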
00:27:55.931 13:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.931 13:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:27:55.931 13:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:55.931 13:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:55.931 13:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:55.931 13:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.931 13:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:56.189 [2024-12-06 13:21:44.719860] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.189 13:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:56.813 13:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:56.813 13:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:57.071 13:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:57.071 13:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:57.328 13:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:57.587 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2a59ed7c-5f31-4191-9adf-c7bc06d4f9ba 00:27:57.587 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a59ed7c-5f31-4191-9adf-c7bc06d4f9ba lvol 20 00:27:57.845 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=63a88bde-84de-4585-bedf-46fd1c9e4cb9 00:27:57.845 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:58.114 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 63a88bde-84de-4585-bedf-46fd1c9e4cb9 00:27:58.371 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:58.629 [2024-12-06 13:21:47.091835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:58.629 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:27:58.887 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=102493
00:27:58.887 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:27:58.887 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:27:59.819 13:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 63a88bde-84de-4585-bedf-46fd1c9e4cb9 MY_SNAPSHOT
00:28:00.385 13:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5e4d86c4-edeb-4d27-9de7-5ad0dbf7cae2
00:28:00.385 13:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 63a88bde-84de-4585-bedf-46fd1c9e4cb9 30
00:28:00.644 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5e4d86c4-edeb-4d27-9de7-5ad0dbf7cae2 MY_CLONE
00:28:00.902 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a91bd280-a275-4016-9732-be40e8cedf9b
00:28:00.902 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a91bd280-a275-4016-9732-be40e8cedf9b
00:28:01.834 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 102493
00:28:09.960 Initializing NVMe Controllers
00:28:09.960 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0
00:28:09.960 Controller IO queue size 128, less than required.
00:28:09.960 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.960 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:28:09.960 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:28:09.960 Initialization complete. Launching workers.
00:28:09.960 ========================================================
00:28:09.960 Latency(us)
00:28:09.960 Device Information : IOPS MiB/s Average min max
00:28:09.960 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 6777.90 26.48 18891.82 5051.28 71139.79
00:28:09.960 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 6917.80 27.02 18505.37 6047.80 73452.19
00:28:09.960 ========================================================
00:28:09.960 Total : 13695.70 53.50 18696.62 5051.28 73452.19
00:28:09.960
00:28:09.960 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:28:09.960 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 63a88bde-84de-4585-bedf-46fd1c9e4cb9
00:28:09.960 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a59ed7c-5f31-4191-9adf-c7bc06d4f9ba
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:10.219 rmmod nvme_tcp
00:28:10.219 rmmod nvme_fabrics
00:28:10.219 rmmod nvme_keyring
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 102345 ']'
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 102345
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 102345 ']'
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 102345
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102345
00:28:10.219 killing
process with pid 102345 00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102345' 00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 102345 00:28:10.219 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 102345 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:10.786 
13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns
00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0
00:28:10.786
00:28:10.786 real 0m16.602s
00:28:10.786 user 0m57.268s
00:28:10.786 sys 0m5.537s
00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:10.786 ************************************
00:28:10.786 END TEST nvmf_lvol
00:28:10.786 ************************************
00:28:10.786 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:11.045 ************************************
00:28:11.045 START TEST nvmf_lvs_grow
00:28:11.045 ************************************
00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:28:11.045 * Looking for test storage...
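Aside: stripped of the trace noise, the nvmf_lvol run that just ended boils down to a short RPC sequence. The following is a condensed, hand-runnable sketch of those calls, not a script that exists in the repo; $rpc stands in for scripts/rpc.py, the sizes are in MiB, and the command substitutions mirror how the test captured the UUIDs each create call prints (2a59ed7c-..., 63a88bde-..., and so on in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                   # first base bdev (Malloc0)
    $rpc bdev_malloc_create 64 512                   # second base bdev (Malloc1)
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)   # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)  # 20 MiB lvol, prints its UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                 # grow the live lvol to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                  # decouple the clone from its snapshot
    # teardown, as in the trace above
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"

The spdk_nvme_perf run (randwrite, qd 128, 10 s, against 10.0.0.3:4420) executes between the listener add and the snapshot, so the snapshot/resize/clone/inflate calls all land while I/O is in flight, which is the point of the test.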
00:28:11.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.045 --rc genhtml_branch_coverage=1 00:28:11.045 --rc genhtml_function_coverage=1 00:28:11.045 --rc genhtml_legend=1 00:28:11.045 --rc geninfo_all_blocks=1 00:28:11.045 --rc geninfo_unexecuted_blocks=1 00:28:11.045 00:28:11.045 ' 00:28:11.045 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.045 --rc genhtml_branch_coverage=1 00:28:11.045 --rc genhtml_function_coverage=1 00:28:11.045 --rc genhtml_legend=1 00:28:11.045 --rc geninfo_all_blocks=1 00:28:11.046 --rc geninfo_unexecuted_blocks=1 00:28:11.046 00:28:11.046 ' 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:11.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.046 --rc genhtml_branch_coverage=1 00:28:11.046 --rc genhtml_function_coverage=1 00:28:11.046 --rc genhtml_legend=1 00:28:11.046 --rc geninfo_all_blocks=1 00:28:11.046 --rc geninfo_unexecuted_blocks=1 00:28:11.046 00:28:11.046 ' 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:11.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.046 --rc genhtml_branch_coverage=1 00:28:11.046 --rc genhtml_function_coverage=1 00:28:11.046 --rc genhtml_legend=1 00:28:11.046 --rc geninfo_all_blocks=1 00:28:11.046 --rc geninfo_unexecuted_blocks=1 00:28:11.046 00:28:11.046 ' 00:28:11.046 13:21:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
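Before any test logic runs, build_nvmf_app_args (the nvmf/common.sh@25-39 entries around this point) composes the target's command line as a bash array. A condensed sketch of that assembly, with the appended flags taken verbatim from the surrounding trace; the guard variable names and defaults are assumptions:

    : "${NVMF_APP_SHM_ID:=0}"                           # the ': 0' at nvmf/common.sh@51
    export NVMF_APP_SHM_ID
    build_nvmf_app_args() {
        if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then    # hypothetical guard; "'[' 0 -eq 1 ']'" above
            NVMF_APP=(sudo -E -u "$NVMF_USER" "${NVMF_APP[@]}")
        fi
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # shm id 0, all tracepoint groups
        NVMF_APP+=("${NO_HUGE[@]}")                     # empty array in this run
        if [ "${TEST_INTERRUPT_MODE:-0}" -eq 1 ]; then  # "'[' 1 -eq 1 ']'": interrupt-mode job
            NVMF_APP+=(--interrupt-mode)
        fi
    }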
00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.046 13:21:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:11.046 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:11.306 Cannot find device "nvmf_init_br" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:11.306 Cannot find device "nvmf_init_br2" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:11.306 Cannot find device "nvmf_tgt_br" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:11.306 Cannot find device "nvmf_tgt_br2" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:11.306 Cannot find device "nvmf_init_br" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:11.306 Cannot find device "nvmf_init_br2" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:11.306 Cannot find device "nvmf_tgt_br" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:11.306 Cannot find device "nvmf_tgt_br2" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:11.306 Cannot find device "nvmf_br" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:11.306 Cannot find device "nvmf_init_if" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:11.306 Cannot find device "nvmf_init_if2" 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:11.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:11.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:11.306 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:11.566 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:11.566 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:11.566 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:11.566 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:11.566 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:11.566 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:28:11.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:11.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:28:11.566 00:28:11.566 --- 10.0.0.3 ping statistics --- 00:28:11.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.566 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:11.566 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:11.566 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:28:11.566 00:28:11.566 --- 10.0.0.4 ping statistics --- 00:28:11.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.566 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:11.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:11.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:28:11.566 00:28:11.566 --- 10.0.0.1 ping statistics --- 00:28:11.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.566 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:11.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:28:11.566 00:28:11.566 --- 10.0.0.2 ping statistics --- 00:28:11.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.566 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=102912 00:28:11.566 13:22:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 102912 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 102912 ']' 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.566 13:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:11.566 [2024-12-06 13:22:00.130488] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:11.566 [2024-12-06 13:22:00.131758] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:28:11.566 [2024-12-06 13:22:00.131835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.826 [2024-12-06 13:22:00.281605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.826 [2024-12-06 13:22:00.350385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.826 [2024-12-06 13:22:00.350447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.826 [2024-12-06 13:22:00.350476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.826 [2024-12-06 13:22:00.350487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.826 [2024-12-06 13:22:00.350496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.826 [2024-12-06 13:22:00.350979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.826 [2024-12-06 13:22:00.454054] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:11.826 [2024-12-06 13:22:00.454508] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
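The "Cannot find device" warnings above are the idempotent teardown of a topology that does not exist yet; nvmf_veth_init then rebuilds it from scratch. Condensed to one of the two identical interface pairs (names and addresses verbatim from the trace, ordering compressed), the setup amounts to:

    # root ns:   nvmf_init_if (10.0.0.1/24) <veth> nvmf_init_br --\
    #                                                              >-- nvmf_br (bridge)
    # tgt netns: nvmf_tgt_if  (10.0.0.3/24) <veth> nvmf_tgt_br ---/    (peer in root ns)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target end moves into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br               # bridge the two root-ns peers
    ip link set nvmf_tgt_br master nvmf_br; ip link set nvmf_tgt_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                    # initiator -> target, as logged
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The second pair (nvmf_init_if2/10.0.0.2 and nvmf_tgt_if2/10.0.0.4) repeats the same steps, after which nvmf_tgt is launched inside the namespace so it listens on 10.0.0.3 while the initiator-side tools connect from the root namespace.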
00:28:12.763 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.763 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:28:12.763 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.763 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.763 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:12.763 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.763 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:13.022 [2024-12-06 13:22:01.579707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:13.022 ************************************ 00:28:13.022 START TEST lvs_grow_clean 00:28:13.022 ************************************ 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:13.022 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:13.282 13:22:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:13.282 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:13.849 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:13.849 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:13.849 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:14.119 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:14.119 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:14.119 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b lvol 150 00:28:14.377 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4644f27f-96c2-4a40-becc-1899954ac27a 00:28:14.377 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:14.377 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:14.636 [2024-12-06 13:22:03.027708] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:14.636 [2024-12-06 13:22:03.027868] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:14.636 true 00:28:14.636 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:14.636 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:14.894 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:14.894 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:15.153 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4644f27f-96c2-4a40-becc-1899954ac27a 00:28:15.412 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:15.671 [2024-12-06 13:22:04.136204] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:15.671 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:28:15.930 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:15.930 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103073 00:28:15.930 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.930 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103073 /var/tmp/bdevperf.sock 00:28:15.930 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 103073 ']' 00:28:15.930 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:15.930 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:15.930 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:15.930 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.930 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.930 [2024-12-06 13:22:04.468542] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
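The bdevperf half of the test, visible in the next few entries, follows SPDK's standard remote-control pattern: start the app idle (-z) on a private RPC socket, attach the NVMe-oF target as bdev Nvme0, then trigger the workload from bdevperf.py. Condensed below with every argument taken verbatim from the trace; the backgrounding and pid capture are implied by -z plus waitforlisten rather than shown:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!                                   # 103073 in this run
    # waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests       # returns once the 10 s run completes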
00:28:15.930 [2024-12-06 13:22:04.468655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103073 ] 00:28:16.189 [2024-12-06 13:22:04.616864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.189 [2024-12-06 13:22:04.677339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.189 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.189 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:28:16.189 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:16.755 Nvme0n1 00:28:16.755 13:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:17.013 [ 00:28:17.013 { 00:28:17.013 "aliases": [ 00:28:17.013 "4644f27f-96c2-4a40-becc-1899954ac27a" 00:28:17.013 ], 00:28:17.013 "assigned_rate_limits": { 00:28:17.013 "r_mbytes_per_sec": 0, 00:28:17.013 "rw_ios_per_sec": 0, 00:28:17.013 "rw_mbytes_per_sec": 0, 00:28:17.013 "w_mbytes_per_sec": 0 00:28:17.013 }, 00:28:17.013 "block_size": 4096, 00:28:17.013 "claimed": false, 00:28:17.013 "driver_specific": { 00:28:17.013 "mp_policy": "active_passive", 00:28:17.013 "nvme": [ 00:28:17.013 { 00:28:17.013 "ctrlr_data": { 00:28:17.013 "ana_reporting": false, 00:28:17.013 "cntlid": 1, 00:28:17.013 "firmware_revision": "25.01", 00:28:17.013 "model_number": "SPDK bdev Controller", 00:28:17.013 "multi_ctrlr": true, 00:28:17.013 "oacs": { 00:28:17.013 "firmware": 0, 00:28:17.013 "format": 0, 00:28:17.013 "ns_manage": 0, 00:28:17.013 "security": 0 00:28:17.013 }, 00:28:17.013 "serial_number": "SPDK0", 00:28:17.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.013 "vendor_id": "0x8086" 00:28:17.013 }, 00:28:17.013 "ns_data": { 00:28:17.013 "can_share": true, 00:28:17.013 "id": 1 00:28:17.013 }, 00:28:17.013 "trid": { 00:28:17.013 "adrfam": "IPv4", 00:28:17.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.013 "traddr": "10.0.0.3", 00:28:17.013 "trsvcid": "4420", 00:28:17.013 "trtype": "TCP" 00:28:17.013 }, 00:28:17.013 "vs": { 00:28:17.013 "nvme_version": "1.3" 00:28:17.013 } 00:28:17.013 } 00:28:17.013 ] 00:28:17.013 }, 00:28:17.013 "memory_domains": [ 00:28:17.013 { 00:28:17.013 "dma_device_id": "system", 00:28:17.013 "dma_device_type": 1 00:28:17.013 } 00:28:17.013 ], 00:28:17.013 "name": "Nvme0n1", 00:28:17.013 "num_blocks": 38912, 00:28:17.013 "numa_id": -1, 00:28:17.013 "product_name": "NVMe disk", 00:28:17.013 "supported_io_types": { 00:28:17.013 "abort": true, 00:28:17.014 "compare": true, 00:28:17.014 "compare_and_write": true, 00:28:17.014 "copy": true, 00:28:17.014 "flush": true, 00:28:17.014 "get_zone_info": false, 00:28:17.014 "nvme_admin": true, 00:28:17.014 "nvme_io": true, 00:28:17.014 "nvme_io_md": false, 00:28:17.014 "nvme_iov_md": false, 00:28:17.014 "read": true, 00:28:17.014 "reset": true, 00:28:17.014 "seek_data": false, 00:28:17.014 
"seek_hole": false, 00:28:17.014 "unmap": true, 00:28:17.014 "write": true, 00:28:17.014 "write_zeroes": true, 00:28:17.014 "zcopy": false, 00:28:17.014 "zone_append": false, 00:28:17.014 "zone_management": false 00:28:17.014 }, 00:28:17.014 "uuid": "4644f27f-96c2-4a40-becc-1899954ac27a", 00:28:17.014 "zoned": false 00:28:17.014 } 00:28:17.014 ] 00:28:17.014 13:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:17.014 13:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103107 00:28:17.014 13:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:17.014 Running I/O for 10 seconds... 00:28:17.953 Latency(us) 00:28:17.953 [2024-12-06T13:22:06.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:17.953 Nvme0n1 : 1.00 7192.00 28.09 0.00 0.00 0.00 0.00 0.00 00:28:17.953 [2024-12-06T13:22:06.593Z] =================================================================================================================== 00:28:17.953 [2024-12-06T13:22:06.593Z] Total : 7192.00 28.09 0.00 0.00 0.00 0.00 0.00 00:28:17.953 00:28:18.892 13:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:19.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:19.151 Nvme0n1 : 2.00 7596.50 29.67 0.00 0.00 0.00 0.00 0.00 00:28:19.151 [2024-12-06T13:22:07.791Z] =================================================================================================================== 00:28:19.151 [2024-12-06T13:22:07.791Z] Total : 7596.50 29.67 0.00 0.00 0.00 0.00 0.00 00:28:19.151 00:28:19.151 true 00:28:19.409 13:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:19.409 13:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:19.667 13:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:19.667 13:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:19.667 13:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 103107 00:28:19.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:19.926 Nvme0n1 : 3.00 7814.67 30.53 0.00 0.00 0.00 0.00 0.00 00:28:19.926 [2024-12-06T13:22:08.566Z] =================================================================================================================== 00:28:19.926 [2024-12-06T13:22:08.566Z] Total : 7814.67 30.53 0.00 0.00 0.00 0.00 0.00 00:28:19.926 00:28:21.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:21.301 Nvme0n1 : 4.00 7815.25 30.53 0.00 0.00 0.00 0.00 0.00 00:28:21.301 
[2024-12-06T13:22:09.941Z] =================================================================================================================== 00:28:21.301 [2024-12-06T13:22:09.941Z] Total : 7815.25 30.53 0.00 0.00 0.00 0.00 0.00 00:28:21.301 00:28:22.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:22.233 Nvme0n1 : 5.00 7753.00 30.29 0.00 0.00 0.00 0.00 0.00 00:28:22.233 [2024-12-06T13:22:10.873Z] =================================================================================================================== 00:28:22.233 [2024-12-06T13:22:10.873Z] Total : 7753.00 30.29 0.00 0.00 0.00 0.00 0.00 00:28:22.233 00:28:23.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:23.166 Nvme0n1 : 6.00 7690.17 30.04 0.00 0.00 0.00 0.00 0.00 00:28:23.166 [2024-12-06T13:22:11.806Z] =================================================================================================================== 00:28:23.166 [2024-12-06T13:22:11.806Z] Total : 7690.17 30.04 0.00 0.00 0.00 0.00 0.00 00:28:23.166 00:28:24.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:24.100 Nvme0n1 : 7.00 7697.71 30.07 0.00 0.00 0.00 0.00 0.00 00:28:24.100 [2024-12-06T13:22:12.740Z] =================================================================================================================== 00:28:24.100 [2024-12-06T13:22:12.740Z] Total : 7697.71 30.07 0.00 0.00 0.00 0.00 0.00 00:28:24.100 00:28:25.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:25.034 Nvme0n1 : 8.00 7710.88 30.12 0.00 0.00 0.00 0.00 0.00 00:28:25.035 [2024-12-06T13:22:13.675Z] =================================================================================================================== 00:28:25.035 [2024-12-06T13:22:13.675Z] Total : 7710.88 30.12 0.00 0.00 0.00 0.00 0.00 00:28:25.035 00:28:25.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:25.971 Nvme0n1 : 9.00 7710.56 30.12 0.00 0.00 0.00 0.00 0.00 00:28:25.971 [2024-12-06T13:22:14.611Z] =================================================================================================================== 00:28:25.971 [2024-12-06T13:22:14.611Z] Total : 7710.56 30.12 0.00 0.00 0.00 0.00 0.00 00:28:25.971 00:28:26.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.907 Nvme0n1 : 10.00 7673.40 29.97 0.00 0.00 0.00 0.00 0.00 00:28:26.907 [2024-12-06T13:22:15.547Z] =================================================================================================================== 00:28:26.907 [2024-12-06T13:22:15.547Z] Total : 7673.40 29.97 0.00 0.00 0.00 0.00 0.00 00:28:26.907 00:28:27.166 00:28:27.166 Latency(us) 00:28:27.166 [2024-12-06T13:22:15.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:27.166 Nvme0n1 : 10.00 7683.53 30.01 0.00 0.00 16653.87 7983.48 40513.16 00:28:27.166 [2024-12-06T13:22:15.806Z] =================================================================================================================== 00:28:27.166 [2024-12-06T13:22:15.806Z] Total : 7683.53 30.01 0.00 0.00 16653.87 7983.48 40513.16 00:28:27.166 { 00:28:27.166 "results": [ 00:28:27.166 { 00:28:27.166 "job": "Nvme0n1", 00:28:27.166 "core_mask": "0x2", 00:28:27.166 "workload": "randwrite", 00:28:27.166 "status": "finished", 00:28:27.166 "queue_depth": 128, 00:28:27.166 "io_size": 4096, 
00:28:27.166 "runtime": 10.003474, 00:28:27.166 "iops": 7683.530741420431, 00:28:27.166 "mibps": 30.013791958673558, 00:28:27.166 "io_failed": 0, 00:28:27.166 "io_timeout": 0, 00:28:27.166 "avg_latency_us": 16653.865630634362, 00:28:27.166 "min_latency_us": 7983.476363636363, 00:28:27.166 "max_latency_us": 40513.163636363635 00:28:27.166 } 00:28:27.166 ], 00:28:27.166 "core_count": 1 00:28:27.166 } 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103073 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 103073 ']' 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 103073 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103073 00:28:27.166 killing process with pid 103073 00:28:27.166 Received shutdown signal, test time was about 10.000000 seconds 00:28:27.166 00:28:27.166 Latency(us) 00:28:27.166 [2024-12-06T13:22:15.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.166 [2024-12-06T13:22:15.806Z] =================================================================================================================== 00:28:27.166 [2024-12-06T13:22:15.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103073' 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 103073 00:28:27.166 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 103073 00:28:27.425 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:28:27.683 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:27.940 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:27.940 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:28.199 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
00:28:28.199 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:28.199 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:28.458 [2024-12-06 13:22:16.883665] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:28.458 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:28.458 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:28:28.458 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:28.458 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.458 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.458 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.458 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.458 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.458 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.458 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.459 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:28.459 13:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:28.718 2024/12/06 13:22:17 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:e1c2a3ef-258a-4fab-9af2-8221ce08b81b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:28:28.718 request: 00:28:28.718 { 00:28:28.718 "method": "bdev_lvol_get_lvstores", 00:28:28.718 "params": { 00:28:28.718 "uuid": "e1c2a3ef-258a-4fab-9af2-8221ce08b81b" 00:28:28.718 } 00:28:28.718 } 00:28:28.718 Got JSON-RPC error response 00:28:28.718 GoRPCClient: error on JSON-RPC call 00:28:28.718 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:28:28.718 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 
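The NOT wrapper around bdev_lvol_get_lvstores here turns the expected lookup failure into a passing assertion. A minimal sketch of the exit-status bookkeeping the trace walks through (es=1 after the failed RPC, the (( es > 128 )) signal check, the inverted final test); the real common/autotest_common.sh version carries more validation than shown:

    NOT() {
        local es=0
        "$@" || es=$?    # the RPC fails with Code=-19 -> es=1
        # Assumed handling: a signal death (es > 128) is a harness problem,
        # not the expected error, so it is propagated; the branch is idle here.
        if (( es > 128 )); then
            return "$es"
        fi
        (( !es == 0 ))   # the trace's final test: succeed only when es != 0
    }
    # Usage, as above: NOT rpc.py bdev_lvol_get_lvstores -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b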
00:28:28.718 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:28.718 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:28.718 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:28.977 aio_bdev 00:28:28.977 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4644f27f-96c2-4a40-becc-1899954ac27a 00:28:28.977 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4644f27f-96c2-4a40-becc-1899954ac27a 00:28:28.977 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:28.977 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:28:28.977 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:28.977 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:28.977 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:29.236 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4644f27f-96c2-4a40-becc-1899954ac27a -t 2000 00:28:29.495 [ 00:28:29.495 { 00:28:29.495 "aliases": [ 00:28:29.495 "lvs/lvol" 00:28:29.495 ], 00:28:29.495 "assigned_rate_limits": { 00:28:29.495 "r_mbytes_per_sec": 0, 00:28:29.495 "rw_ios_per_sec": 0, 00:28:29.495 "rw_mbytes_per_sec": 0, 00:28:29.495 "w_mbytes_per_sec": 0 00:28:29.495 }, 00:28:29.495 "block_size": 4096, 00:28:29.495 "claimed": false, 00:28:29.495 "driver_specific": { 00:28:29.495 "lvol": { 00:28:29.495 "base_bdev": "aio_bdev", 00:28:29.495 "clone": false, 00:28:29.495 "esnap_clone": false, 00:28:29.495 "lvol_store_uuid": "e1c2a3ef-258a-4fab-9af2-8221ce08b81b", 00:28:29.495 "num_allocated_clusters": 38, 00:28:29.495 "snapshot": false, 00:28:29.495 "thin_provision": false 00:28:29.495 } 00:28:29.495 }, 00:28:29.495 "name": "4644f27f-96c2-4a40-becc-1899954ac27a", 00:28:29.495 "num_blocks": 38912, 00:28:29.495 "product_name": "Logical Volume", 00:28:29.495 "supported_io_types": { 00:28:29.495 "abort": false, 00:28:29.495 "compare": false, 00:28:29.495 "compare_and_write": false, 00:28:29.495 "copy": false, 00:28:29.495 "flush": false, 00:28:29.495 "get_zone_info": false, 00:28:29.495 "nvme_admin": false, 00:28:29.495 "nvme_io": false, 00:28:29.495 "nvme_io_md": false, 00:28:29.495 "nvme_iov_md": false, 00:28:29.495 "read": true, 00:28:29.495 "reset": true, 00:28:29.495 "seek_data": true, 00:28:29.495 "seek_hole": true, 00:28:29.495 "unmap": true, 00:28:29.495 "write": true, 00:28:29.495 "write_zeroes": true, 00:28:29.495 "zcopy": false, 00:28:29.495 "zone_append": false, 00:28:29.495 "zone_management": false 00:28:29.495 }, 00:28:29.495 "uuid": 
"4644f27f-96c2-4a40-becc-1899954ac27a", 00:28:29.495 "zoned": false 00:28:29.495 } 00:28:29.495 ] 00:28:29.495 13:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:28:29.495 13:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:29.496 13:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:29.754 13:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:29.754 13:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:29.754 13:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:30.013 13:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:30.013 13:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4644f27f-96c2-4a40-becc-1899954ac27a 00:28:30.272 13:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e1c2a3ef-258a-4fab-9af2-8221ce08b81b 00:28:30.531 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:30.790 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:31.359 ************************************ 00:28:31.359 END TEST lvs_grow_clean 00:28:31.359 ************************************ 00:28:31.359 00:28:31.359 real 0m18.185s 00:28:31.359 user 0m17.480s 00:28:31.359 sys 0m2.213s 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:31.359 ************************************ 00:28:31.359 START TEST lvs_grow_dirty 00:28:31.359 ************************************ 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:28:31.359 13:22:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:31.359 13:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:31.631 13:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:31.631 13:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:31.890 13:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:31.890 13:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:31.890 13:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:32.148 13:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:32.148 13:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:32.148 13:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df lvol 150 00:28:32.407 13:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=81055242-aea8-48e6-9fd0-278f792b50e7 00:28:32.407 13:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:32.407 13:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:32.666 [2024-12-06 13:22:21.255724] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:32.666 [2024-12-06 13:22:21.255902] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:32.666 true 00:28:32.666 13:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:32.666 13:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:33.234 13:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:33.234 13:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:33.234 13:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 81055242-aea8-48e6-9fd0-278f792b50e7 00:28:33.800 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:33.800 [2024-12-06 13:22:22.412094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:33.800 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:28:34.368 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:34.368 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103497 00:28:34.368 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:34.368 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103497 /var/tmp/bdevperf.sock 00:28:34.368 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 103497 ']' 00:28:34.368 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:34.368 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
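
[annotation] The rescan above is the pivot of the grow test: the backing file is enlarged with truncate while the lvstore is live, and bdev_aio_rescan republishes the new size (51200 -> 102400 blocks of 4096 bytes, i.e. 200 MiB -> 400 MiB). A condensed sketch of the sequence the script has driven so far; every command is copied from this log, only the lvstore UUID (shown as a placeholder) is per-run:

  truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev      # 200 MiB backing file
  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs    # 200/4 = 50 clusters, 1 for metadata => 49 data
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150                        # 150 MiB lvol = 38 clusters
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev      # grow the file under the bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev                                      # block count 51200 -> 102400
  # total_data_clusters stays 49 until bdev_lvol_grow_lvstore runs later in the test

The bdevperf process starting here (pid 103497) drives randwrite I/O against the exported lvol while the grow happens, which is what distinguishes this dirty variant from the clean variant that finished above.
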
00:28:34.368 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:34.368 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.368 13:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:34.368 [2024-12-06 13:22:22.782245] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:28:34.368 [2024-12-06 13:22:22.782424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103497 ] 00:28:34.368 [2024-12-06 13:22:22.941785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.627 [2024-12-06 13:22:23.014745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.193 13:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.193 13:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:35.193 13:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:35.451 Nvme0n1 00:28:35.451 13:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:35.709 [ 00:28:35.709 { 00:28:35.709 "aliases": [ 00:28:35.709 "81055242-aea8-48e6-9fd0-278f792b50e7" 00:28:35.709 ], 00:28:35.709 "assigned_rate_limits": { 00:28:35.709 "r_mbytes_per_sec": 0, 00:28:35.709 "rw_ios_per_sec": 0, 00:28:35.709 "rw_mbytes_per_sec": 0, 00:28:35.709 "w_mbytes_per_sec": 0 00:28:35.709 }, 00:28:35.709 "block_size": 4096, 00:28:35.709 "claimed": false, 00:28:35.709 "driver_specific": { 00:28:35.709 "mp_policy": "active_passive", 00:28:35.709 "nvme": [ 00:28:35.709 { 00:28:35.709 "ctrlr_data": { 00:28:35.709 "ana_reporting": false, 00:28:35.709 "cntlid": 1, 00:28:35.709 "firmware_revision": "25.01", 00:28:35.709 "model_number": "SPDK bdev Controller", 00:28:35.709 "multi_ctrlr": true, 00:28:35.709 "oacs": { 00:28:35.709 "firmware": 0, 00:28:35.709 "format": 0, 00:28:35.709 "ns_manage": 0, 00:28:35.709 "security": 0 00:28:35.709 }, 00:28:35.709 "serial_number": "SPDK0", 00:28:35.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.709 "vendor_id": "0x8086" 00:28:35.709 }, 00:28:35.709 "ns_data": { 00:28:35.709 "can_share": true, 00:28:35.709 "id": 1 00:28:35.709 }, 00:28:35.709 "trid": { 00:28:35.709 "adrfam": "IPv4", 00:28:35.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.709 "traddr": "10.0.0.3", 00:28:35.709 "trsvcid": "4420", 00:28:35.709 "trtype": "TCP" 00:28:35.709 }, 00:28:35.709 "vs": { 00:28:35.709 "nvme_version": "1.3" 00:28:35.709 } 00:28:35.709 } 00:28:35.709 ] 00:28:35.709 }, 00:28:35.709 "memory_domains": [ 00:28:35.709 { 00:28:35.709 "dma_device_id": "system", 00:28:35.709 "dma_device_type": 1 
00:28:35.709 } 00:28:35.709 ], 00:28:35.709 "name": "Nvme0n1", 00:28:35.709 "num_blocks": 38912, 00:28:35.709 "numa_id": -1, 00:28:35.709 "product_name": "NVMe disk", 00:28:35.709 "supported_io_types": { 00:28:35.709 "abort": true, 00:28:35.709 "compare": true, 00:28:35.709 "compare_and_write": true, 00:28:35.709 "copy": true, 00:28:35.709 "flush": true, 00:28:35.709 "get_zone_info": false, 00:28:35.709 "nvme_admin": true, 00:28:35.709 "nvme_io": true, 00:28:35.709 "nvme_io_md": false, 00:28:35.709 "nvme_iov_md": false, 00:28:35.709 "read": true, 00:28:35.709 "reset": true, 00:28:35.709 "seek_data": false, 00:28:35.709 "seek_hole": false, 00:28:35.709 "unmap": true, 00:28:35.709 "write": true, 00:28:35.709 "write_zeroes": true, 00:28:35.709 "zcopy": false, 00:28:35.709 "zone_append": false, 00:28:35.709 "zone_management": false 00:28:35.709 }, 00:28:35.709 "uuid": "81055242-aea8-48e6-9fd0-278f792b50e7", 00:28:35.709 "zoned": false 00:28:35.709 } 00:28:35.709 ] 00:28:35.709 13:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103545 00:28:35.709 13:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:35.968 13:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:35.968 Running I/O for 10 seconds... 00:28:36.908 Latency(us) 00:28:36.908 [2024-12-06T13:22:25.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:36.908 Nvme0n1 : 1.00 7470.00 29.18 0.00 0.00 0.00 0.00 0.00 00:28:36.908 [2024-12-06T13:22:25.548Z] =================================================================================================================== 00:28:36.908 [2024-12-06T13:22:25.548Z] Total : 7470.00 29.18 0.00 0.00 0.00 0.00 0.00 00:28:36.908 00:28:37.840 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:37.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:37.840 Nvme0n1 : 2.00 7957.50 31.08 0.00 0.00 0.00 0.00 0.00 00:28:37.840 [2024-12-06T13:22:26.480Z] =================================================================================================================== 00:28:37.840 [2024-12-06T13:22:26.480Z] Total : 7957.50 31.08 0.00 0.00 0.00 0.00 0.00 00:28:37.840 00:28:38.096 true 00:28:38.096 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:38.096 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:38.661 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:38.661 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:38.661 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 103545 00:28:38.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:38.920 Nvme0n1 : 3.00 8083.67 31.58 0.00 0.00 0.00 0.00 0.00 00:28:38.920 [2024-12-06T13:22:27.560Z] =================================================================================================================== 00:28:38.920 [2024-12-06T13:22:27.560Z] Total : 8083.67 31.58 0.00 0.00 0.00 0.00 0.00 00:28:38.920 00:28:39.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:39.854 Nvme0n1 : 4.00 8125.75 31.74 0.00 0.00 0.00 0.00 0.00 00:28:39.854 [2024-12-06T13:22:28.494Z] =================================================================================================================== 00:28:39.854 [2024-12-06T13:22:28.494Z] Total : 8125.75 31.74 0.00 0.00 0.00 0.00 0.00 00:28:39.854 00:28:41.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:41.229 Nvme0n1 : 5.00 8138.40 31.79 0.00 0.00 0.00 0.00 0.00 00:28:41.229 [2024-12-06T13:22:29.869Z] =================================================================================================================== 00:28:41.229 [2024-12-06T13:22:29.869Z] Total : 8138.40 31.79 0.00 0.00 0.00 0.00 0.00 00:28:41.229 00:28:42.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:42.165 Nvme0n1 : 6.00 8025.50 31.35 0.00 0.00 0.00 0.00 0.00 00:28:42.165 [2024-12-06T13:22:30.805Z] =================================================================================================================== 00:28:42.165 [2024-12-06T13:22:30.805Z] Total : 8025.50 31.35 0.00 0.00 0.00 0.00 0.00 00:28:42.165 00:28:43.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:43.100 Nvme0n1 : 7.00 7931.29 30.98 0.00 0.00 0.00 0.00 0.00 00:28:43.100 [2024-12-06T13:22:31.740Z] =================================================================================================================== 00:28:43.100 [2024-12-06T13:22:31.740Z] Total : 7931.29 30.98 0.00 0.00 0.00 0.00 0.00 00:28:43.100 00:28:44.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:44.038 Nvme0n1 : 8.00 7873.62 30.76 0.00 0.00 0.00 0.00 0.00 00:28:44.038 [2024-12-06T13:22:32.678Z] =================================================================================================================== 00:28:44.038 [2024-12-06T13:22:32.678Z] Total : 7873.62 30.76 0.00 0.00 0.00 0.00 0.00 00:28:44.038 00:28:44.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:44.974 Nvme0n1 : 9.00 7817.56 30.54 0.00 0.00 0.00 0.00 0.00 00:28:44.974 [2024-12-06T13:22:33.614Z] =================================================================================================================== 00:28:44.974 [2024-12-06T13:22:33.614Z] Total : 7817.56 30.54 0.00 0.00 0.00 0.00 0.00 00:28:44.974 00:28:45.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:45.911 Nvme0n1 : 10.00 7766.60 30.34 0.00 0.00 0.00 0.00 0.00 00:28:45.911 [2024-12-06T13:22:34.551Z] =================================================================================================================== 00:28:45.911 [2024-12-06T13:22:34.551Z] Total : 7766.60 30.34 0.00 0.00 0.00 0.00 0.00 00:28:45.911 00:28:45.911 00:28:45.911 Latency(us) 00:28:45.911 [2024-12-06T13:22:34.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.911 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:28:45.911 Nvme0n1 : 10.01 7770.01 30.35 0.00 0.00 16468.04 6911.07 82456.20 00:28:45.911 [2024-12-06T13:22:34.551Z] =================================================================================================================== 00:28:45.911 [2024-12-06T13:22:34.551Z] Total : 7770.01 30.35 0.00 0.00 16468.04 6911.07 82456.20 00:28:45.911 { 00:28:45.911 "results": [ 00:28:45.911 { 00:28:45.911 "job": "Nvme0n1", 00:28:45.911 "core_mask": "0x2", 00:28:45.911 "workload": "randwrite", 00:28:45.911 "status": "finished", 00:28:45.911 "queue_depth": 128, 00:28:45.911 "io_size": 4096, 00:28:45.911 "runtime": 10.012083, 00:28:45.911 "iops": 7770.011495110458, 00:28:45.911 "mibps": 30.351607402775226, 00:28:45.911 "io_failed": 0, 00:28:45.911 "io_timeout": 0, 00:28:45.911 "avg_latency_us": 16468.041391927865, 00:28:45.911 "min_latency_us": 6911.069090909091, 00:28:45.911 "max_latency_us": 82456.20363636364 00:28:45.911 } 00:28:45.911 ], 00:28:45.911 "core_count": 1 00:28:45.911 } 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103497 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 103497 ']' 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 103497 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103497 00:28:45.911 killing process with pid 103497 00:28:45.911 Received shutdown signal, test time was about 10.000000 seconds 00:28:45.911 00:28:45.911 Latency(us) 00:28:45.911 [2024-12-06T13:22:34.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.911 [2024-12-06T13:22:34.551Z] =================================================================================================================== 00:28:45.911 [2024-12-06T13:22:34.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103497' 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 103497 00:28:45.911 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 103497 00:28:46.170 13:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:28:46.429 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:46.686 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:46.686 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:47.251 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:47.251 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:47.251 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 102912 00:28:47.251 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 102912 00:28:47.251 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 102912 Killed "${NVMF_APP[@]}" "$@" 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=103705 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 103705 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 103705 ']' 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
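
[annotation] Note the restart semantics: the previous target (pid 102912) was removed with kill -9, so the lvstore on disk was never closed cleanly; that is what makes this the dirty path. The replacement target is relaunched in interrupt mode, flags copied from the log line above (the netns and the -i shm id are fixtures of this CI environment):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1

With --interrupt-mode the reactor on core 0 waits on file descriptors instead of busy-polling; the startup notices below confirm this when app_thread and nvmf_tgt_poll_group_000 are switched to intr mode.
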
00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.252 13:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:47.252 [2024-12-06 13:22:35.768775] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:47.252 [2024-12-06 13:22:35.769955] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:28:47.252 [2024-12-06 13:22:35.770022] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.511 [2024-12-06 13:22:35.922293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.511 [2024-12-06 13:22:35.990703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.511 [2024-12-06 13:22:35.990789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.511 [2024-12-06 13:22:35.990816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.511 [2024-12-06 13:22:35.990827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.511 [2024-12-06 13:22:35.990836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.511 [2024-12-06 13:22:35.991319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.511 [2024-12-06 13:22:36.095431] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:47.511 [2024-12-06 13:22:36.095856] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
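
[annotation] What follows is the actual recovery check: re-creating the AIO bdev over the same file makes the lvstore load in a dirty state, so blobstore replay runs (the bs_recover and "Recover: blob 0x0 / 0x1" notices below), after which the test asserts that the grow performed before the kill survived. A sketch of the verification, with per-run UUIDs as placeholders and expected values taken from the assertions below:

  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000                                    # lvol reappears once recovery completes
  scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'        # expects 61
  scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'  # expects 99

The numbers are consistent: 400 MiB at 4 MiB per cluster gives 100 clusters, one for metadata leaves 99 data clusters, and the 150 MiB lvol pins 38 of them ("num_allocated_clusters": 38 below), leaving 61 free.
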
00:28:48.446 13:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.446 13:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:48.446 13:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.446 13:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:48.446 13:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:48.446 13:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.446 13:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:48.706 [2024-12-06 13:22:37.134135] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:48.706 [2024-12-06 13:22:37.134771] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:48.706 [2024-12-06 13:22:37.135112] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:48.706 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:48.706 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 81055242-aea8-48e6-9fd0-278f792b50e7 00:28:48.706 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=81055242-aea8-48e6-9fd0-278f792b50e7 00:28:48.706 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:48.706 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:48.706 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:48.706 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:48.706 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:48.964 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81055242-aea8-48e6-9fd0-278f792b50e7 -t 2000 00:28:49.223 [ 00:28:49.223 { 00:28:49.223 "aliases": [ 00:28:49.223 "lvs/lvol" 00:28:49.223 ], 00:28:49.223 "assigned_rate_limits": { 00:28:49.223 "r_mbytes_per_sec": 0, 00:28:49.223 "rw_ios_per_sec": 0, 00:28:49.223 "rw_mbytes_per_sec": 0, 00:28:49.223 "w_mbytes_per_sec": 0 00:28:49.223 }, 00:28:49.223 "block_size": 4096, 00:28:49.223 "claimed": false, 00:28:49.223 "driver_specific": { 00:28:49.223 "lvol": { 00:28:49.223 "base_bdev": "aio_bdev", 00:28:49.223 "clone": false, 00:28:49.223 "esnap_clone": false, 00:28:49.223 
"lvol_store_uuid": "6c1ece79-23eb-4cd6-9164-4cfc6533e3df", 00:28:49.223 "num_allocated_clusters": 38, 00:28:49.223 "snapshot": false, 00:28:49.223 "thin_provision": false 00:28:49.223 } 00:28:49.223 }, 00:28:49.223 "name": "81055242-aea8-48e6-9fd0-278f792b50e7", 00:28:49.223 "num_blocks": 38912, 00:28:49.223 "product_name": "Logical Volume", 00:28:49.223 "supported_io_types": { 00:28:49.223 "abort": false, 00:28:49.223 "compare": false, 00:28:49.223 "compare_and_write": false, 00:28:49.223 "copy": false, 00:28:49.223 "flush": false, 00:28:49.223 "get_zone_info": false, 00:28:49.223 "nvme_admin": false, 00:28:49.223 "nvme_io": false, 00:28:49.223 "nvme_io_md": false, 00:28:49.223 "nvme_iov_md": false, 00:28:49.223 "read": true, 00:28:49.223 "reset": true, 00:28:49.223 "seek_data": true, 00:28:49.223 "seek_hole": true, 00:28:49.223 "unmap": true, 00:28:49.223 "write": true, 00:28:49.223 "write_zeroes": true, 00:28:49.223 "zcopy": false, 00:28:49.223 "zone_append": false, 00:28:49.223 "zone_management": false 00:28:49.223 }, 00:28:49.223 "uuid": "81055242-aea8-48e6-9fd0-278f792b50e7", 00:28:49.223 "zoned": false 00:28:49.223 } 00:28:49.223 ] 00:28:49.223 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:49.223 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:49.223 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:49.481 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:49.481 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:49.481 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:50.047 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:50.047 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:50.311 [2024-12-06 13:22:38.693224] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.311 
13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:50.311 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:50.583 2024/12/06 13:22:39 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:6c1ece79-23eb-4cd6-9164-4cfc6533e3df], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:28:50.583 request: 00:28:50.583 { 00:28:50.583 "method": "bdev_lvol_get_lvstores", 00:28:50.583 "params": { 00:28:50.583 "uuid": "6c1ece79-23eb-4cd6-9164-4cfc6533e3df" 00:28:50.583 } 00:28:50.583 } 00:28:50.583 Got JSON-RPC error response 00:28:50.583 GoRPCClient: error on JSON-RPC call 00:28:50.583 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:50.583 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:50.583 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:50.583 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:50.583 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:50.841 aio_bdev 00:28:50.841 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 81055242-aea8-48e6-9fd0-278f792b50e7 00:28:50.841 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=81055242-aea8-48e6-9fd0-278f792b50e7 00:28:50.841 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:50.841 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:50.841 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:50.841 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:50.841 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:51.099 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81055242-aea8-48e6-9fd0-278f792b50e7 -t 2000 00:28:51.357 [ 00:28:51.357 { 00:28:51.357 "aliases": [ 00:28:51.357 "lvs/lvol" 00:28:51.357 ], 00:28:51.357 "assigned_rate_limits": { 00:28:51.357 "r_mbytes_per_sec": 0, 00:28:51.357 "rw_ios_per_sec": 0, 00:28:51.357 "rw_mbytes_per_sec": 0, 00:28:51.357 "w_mbytes_per_sec": 0 00:28:51.357 }, 00:28:51.357 "block_size": 4096, 00:28:51.357 "claimed": false, 00:28:51.357 "driver_specific": { 00:28:51.357 "lvol": { 00:28:51.357 "base_bdev": "aio_bdev", 00:28:51.357 "clone": false, 00:28:51.357 "esnap_clone": false, 00:28:51.357 "lvol_store_uuid": "6c1ece79-23eb-4cd6-9164-4cfc6533e3df", 00:28:51.357 "num_allocated_clusters": 38, 00:28:51.357 "snapshot": false, 00:28:51.357 "thin_provision": false 00:28:51.357 } 00:28:51.357 }, 00:28:51.357 "name": "81055242-aea8-48e6-9fd0-278f792b50e7", 00:28:51.357 "num_blocks": 38912, 00:28:51.357 "product_name": "Logical Volume", 00:28:51.357 "supported_io_types": { 00:28:51.357 "abort": false, 00:28:51.357 "compare": false, 00:28:51.357 "compare_and_write": false, 00:28:51.357 "copy": false, 00:28:51.357 "flush": false, 00:28:51.357 "get_zone_info": false, 00:28:51.357 "nvme_admin": false, 00:28:51.357 "nvme_io": false, 00:28:51.357 "nvme_io_md": false, 00:28:51.357 "nvme_iov_md": false, 00:28:51.357 "read": true, 00:28:51.357 "reset": true, 00:28:51.357 "seek_data": true, 00:28:51.357 "seek_hole": true, 00:28:51.357 "unmap": true, 00:28:51.357 "write": true, 00:28:51.357 "write_zeroes": true, 00:28:51.357 "zcopy": false, 00:28:51.357 "zone_append": false, 00:28:51.357 "zone_management": false 00:28:51.357 }, 00:28:51.357 "uuid": "81055242-aea8-48e6-9fd0-278f792b50e7", 00:28:51.357 "zoned": false 00:28:51.357 } 00:28:51.357 ] 00:28:51.357 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:51.358 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:51.358 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:51.924 13:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:51.924 13:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:51.924 13:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:52.182 13:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:52.182 
13:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 81055242-aea8-48e6-9fd0-278f792b50e7 00:28:52.440 13:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c1ece79-23eb-4cd6-9164-4cfc6533e3df 00:28:52.698 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:52.956 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:53.520 00:28:53.520 real 0m22.103s 00:28:53.520 user 0m29.334s 00:28:53.520 sys 0m8.486s 00:28:53.520 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.520 ************************************ 00:28:53.520 END TEST lvs_grow_dirty 00:28:53.520 ************************************ 00:28:53.520 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:53.520 nvmf_trace.0 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:53.520 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:54.086 13:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:54.086 rmmod nvme_tcp 00:28:54.086 rmmod nvme_fabrics 00:28:54.086 rmmod nvme_keyring 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 103705 ']' 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 103705 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 103705 ']' 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 103705 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103705 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:54.086 killing process with pid 103705 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103705' 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 103705 00:28:54.086 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 103705 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:54.343 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:54.601 13:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:28:54.601 00:28:54.601 real 0m43.701s 00:28:54.601 user 0m48.482s 00:28:54.601 sys 0m11.857s 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:54.601 ************************************ 00:28:54.601 END TEST nvmf_lvs_grow 00:28:54.601 ************************************ 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:54.601 ************************************ 00:28:54.601 START TEST nvmf_bdev_io_wait 00:28:54.601 ************************************ 00:28:54.601 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:54.860 * Looking for test storage... 00:28:54.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:54.860 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:54.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.861 --rc genhtml_branch_coverage=1 00:28:54.861 --rc genhtml_function_coverage=1 00:28:54.861 --rc genhtml_legend=1 00:28:54.861 --rc geninfo_all_blocks=1 00:28:54.861 --rc geninfo_unexecuted_blocks=1 00:28:54.861 00:28:54.861 ' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:54.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.861 --rc genhtml_branch_coverage=1 00:28:54.861 --rc genhtml_function_coverage=1 00:28:54.861 --rc genhtml_legend=1 00:28:54.861 --rc geninfo_all_blocks=1 00:28:54.861 --rc geninfo_unexecuted_blocks=1 00:28:54.861 00:28:54.861 ' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:54.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.861 --rc genhtml_branch_coverage=1 00:28:54.861 --rc genhtml_function_coverage=1 00:28:54.861 --rc genhtml_legend=1 00:28:54.861 --rc geninfo_all_blocks=1 00:28:54.861 --rc geninfo_unexecuted_blocks=1 00:28:54.861 00:28:54.861 ' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:54.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.861 --rc genhtml_branch_coverage=1 00:28:54.861 --rc genhtml_function_coverage=1 00:28:54.861 --rc genhtml_legend=1 00:28:54.861 --rc geninfo_all_blocks=1 00:28:54.861 --rc 
geninfo_unexecuted_blocks=1 00:28:54.861 00:28:54.861 ' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:54.861 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:54.862 Cannot find device "nvmf_init_br" 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:54.862 Cannot find device "nvmf_init_br2" 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:54.862 Cannot find device "nvmf_tgt_br" 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:54.862 Cannot find device "nvmf_tgt_br2" 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:54.862 Cannot find device "nvmf_init_br" 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:28:54.862 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:55.120 Cannot find device "nvmf_init_br2" 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:28:55.120 Cannot find device "nvmf_tgt_br" 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:55.120 Cannot find device "nvmf_tgt_br2" 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:55.120 Cannot find device "nvmf_br" 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:55.120 Cannot find device "nvmf_init_if" 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:55.120 Cannot find device "nvmf_init_if2" 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:55.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:55.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:55.120 13:22:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:55.120 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:55.378 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:55.378 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:55.378 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:55.378 
13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:55.378 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:55.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:55.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:28:55.378 00:28:55.378 --- 10.0.0.3 ping statistics --- 00:28:55.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.378 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:28:55.378 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:55.378 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:55.378 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:28:55.378 00:28:55.378 --- 10.0.0.4 ping statistics --- 00:28:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.379 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:55.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:55.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:28:55.379 00:28:55.379 --- 10.0.0.1 ping statistics --- 00:28:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.379 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:55.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:55.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:28:55.379 00:28:55.379 --- 10.0.0.2 ping statistics --- 00:28:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.379 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=104181 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 104181 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 104181 ']' 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
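The waitforlisten step above blocks until the just-launched nvmf_tgt (pid 104181) answers on its UNIX-domain RPC socket before any test RPCs are issued. A minimal sketch of that polling pattern, assuming SPDK's scripts/rpc.py is on the path (an illustration of the idea, not the harness's exact implementation):
# Poll until the target's RPC socket exists and answers a trivial RPC.
rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    if [ -S "$rpc_sock" ] && scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done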
00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.379 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.379 [2024-12-06 13:22:43.869174] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:55.379 [2024-12-06 13:22:43.870242] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:28:55.379 [2024-12-06 13:22:43.870313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.638 [2024-12-06 13:22:44.026967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.638 [2024-12-06 13:22:44.094073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.638 [2024-12-06 13:22:44.094132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.638 [2024-12-06 13:22:44.094147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.638 [2024-12-06 13:22:44.094157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.638 [2024-12-06 13:22:44.094166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.638 [2024-12-06 13:22:44.095356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.638 [2024-12-06 13:22:44.095443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.638 [2024-12-06 13:22:44.095491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.638 [2024-12-06 13:22:44.095495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.638 [2024-12-06 13:22:44.096792] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
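The four "Reactor started on core N" notices map one-to-one onto the -m 0xF mask passed to nvmf_tgt: bit n of the mask assigns a reactor to logical core n. A quick illustrative way to decode any such mask in plain bash:
# Each set bit n in an SPDK core mask means a reactor runs on core n;
# 0xF sets bits 0-3, hence the four reactors in the notices above.
mask=0xF
for core in $(seq 0 31); do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done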
00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.638 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.897 [2024-12-06 13:22:44.281271] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:55.897 [2024-12-06 13:22:44.282136] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:55.897 [2024-12-06 13:22:44.282608] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:55.897 [2024-12-06 13:22:44.283170] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
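Because the target was launched with --wait-for-rpc, subsystem initialization stays deferred until framework_start_init, which is what lets the test apply bdev_set_options -p 5 -c 1 first; the deliberately tiny bdev_io pool and cache are what starve I/O allocations and exercise the bdev-io-wait retry path this test is named for. Issued by hand, the sequence would look roughly like this (socket path illustrative):
# Early-only option, must land before framework_start_init: shrink the
# bdev_io pool (-p) and per-channel cache (-c) to force allocation waits.
scripts/rpc.py -s /var/tmp/spdk.sock bdev_set_options -p 5 -c 1
# Resume the deferred startup; the poll-group threads then come up in
# interrupt mode, as the thread.c notices above show.
scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init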
00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.897 [2024-12-06 13:22:44.293034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.897 Malloc0 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.897 [2024-12-06 13:22:44.361336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=104220 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=104222 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.897 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.897 { 00:28:55.897 "params": { 00:28:55.897 "name": "Nvme$subsystem", 00:28:55.897 "trtype": "$TEST_TRANSPORT", 00:28:55.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.897 "adrfam": "ipv4", 00:28:55.897 "trsvcid": "$NVMF_PORT", 00:28:55.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.897 "hdgst": ${hdgst:-false}, 00:28:55.897 "ddgst": ${ddgst:-false} 00:28:55.897 }, 00:28:55.897 "method": "bdev_nvme_attach_controller" 00:28:55.898 } 00:28:55.898 EOF 00:28:55.898 )") 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=104224 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.898 { 00:28:55.898 "params": { 00:28:55.898 "name": "Nvme$subsystem", 00:28:55.898 "trtype": "$TEST_TRANSPORT", 00:28:55.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.898 "adrfam": "ipv4", 00:28:55.898 "trsvcid": "$NVMF_PORT", 00:28:55.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.898 "hdgst": ${hdgst:-false}, 00:28:55.898 "ddgst": ${ddgst:-false} 00:28:55.898 }, 00:28:55.898 "method": "bdev_nvme_attach_controller" 00:28:55.898 } 00:28:55.898 EOF 00:28:55.898 )") 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=104227 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.898 { 00:28:55.898 "params": { 00:28:55.898 "name": "Nvme$subsystem", 00:28:55.898 "trtype": "$TEST_TRANSPORT", 00:28:55.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.898 "adrfam": "ipv4", 00:28:55.898 "trsvcid": "$NVMF_PORT", 00:28:55.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.898 "hdgst": ${hdgst:-false}, 00:28:55.898 "ddgst": ${ddgst:-false} 00:28:55.898 }, 00:28:55.898 "method": "bdev_nvme_attach_controller" 00:28:55.898 } 00:28:55.898 EOF 00:28:55.898 )") 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
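Each of the four bdevperf instances launched above (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) reads its bdev configuration as JSON from --json /dev/fd/63, i.e. from a bash process substitution rather than a file on disk. A sketch of the pattern, reusing the harness's gen_nvmf_target_json helper from the trace:
# Process substitution exposes the generated config as /dev/fd/N, so no
# temporary JSON file is needed; the shell trace shows it as /dev/fd/63.
build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(gen_nvmf_target_json)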
00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.898 { 00:28:55.898 "params": { 00:28:55.898 "name": "Nvme$subsystem", 00:28:55.898 "trtype": "$TEST_TRANSPORT", 00:28:55.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.898 "adrfam": "ipv4", 00:28:55.898 "trsvcid": "$NVMF_PORT", 00:28:55.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.898 "hdgst": ${hdgst:-false}, 00:28:55.898 "ddgst": ${ddgst:-false} 00:28:55.898 }, 00:28:55.898 "method": "bdev_nvme_attach_controller" 00:28:55.898 } 00:28:55.898 EOF 00:28:55.898 )") 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.898 "params": { 00:28:55.898 "name": "Nvme1", 00:28:55.898 "trtype": "tcp", 00:28:55.898 "traddr": "10.0.0.3", 00:28:55.898 "adrfam": "ipv4", 00:28:55.898 "trsvcid": "4420", 00:28:55.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.898 "hdgst": false, 00:28:55.898 "ddgst": false 00:28:55.898 }, 00:28:55.898 "method": "bdev_nvme_attach_controller" 00:28:55.898 }' 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.898 "params": { 00:28:55.898 "name": "Nvme1", 00:28:55.898 "trtype": "tcp", 00:28:55.898 "traddr": "10.0.0.3", 00:28:55.898 "adrfam": "ipv4", 00:28:55.898 "trsvcid": "4420", 00:28:55.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.898 "hdgst": false, 00:28:55.898 "ddgst": false 00:28:55.898 }, 00:28:55.898 "method": "bdev_nvme_attach_controller" 00:28:55.898 }' 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
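The rendered config above boils down to one bdev_nvme_attach_controller call per bdevperf instance; the equivalent imperative RPC against a running app would look roughly like this (socket path illustrative, short flags as rpc.py accepts them):
# Attach the NVMe-oF subsystem at 10.0.0.3:4420 over TCP; with -b Nvme1 the
# resulting namespace bdev is Nvme1n1, the device the I/O jobs run against.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1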
00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.898 "params": { 00:28:55.898 "name": "Nvme1", 00:28:55.898 "trtype": "tcp", 00:28:55.898 "traddr": "10.0.0.3", 00:28:55.898 "adrfam": "ipv4", 00:28:55.898 "trsvcid": "4420", 00:28:55.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.898 "hdgst": false, 00:28:55.898 "ddgst": false 00:28:55.898 }, 00:28:55.898 "method": "bdev_nvme_attach_controller" 00:28:55.898 }' 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:55.898 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.898 "params": { 00:28:55.898 "name": "Nvme1", 00:28:55.898 "trtype": "tcp", 00:28:55.898 "traddr": "10.0.0.3", 00:28:55.898 "adrfam": "ipv4", 00:28:55.898 "trsvcid": "4420", 00:28:55.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.898 "hdgst": false, 00:28:55.898 "ddgst": false 00:28:55.898 }, 00:28:55.898 "method": "bdev_nvme_attach_controller" 00:28:55.898 }' [2024-12-06 13:22:44.426339] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... [2024-12-06 13:22:44.426340] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... [2024-12-06 13:22:44.426432] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:55.898 [2024-12-06 13:22:44.426433] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:28:55.898 [2024-12-06 13:22:44.441811] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... [2024-12-06 13:22:44.442119] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:28:55.898 [2024-12-06 13:22:44.468170] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:28:55.898 [2024-12-06 13:22:44.468276] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:28:56.157 13:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 104220 00:28:56.157 [2024-12-06 13:22:44.650218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.157 [2024-12-06 13:22:44.705525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:56.157 [2024-12-06 13:22:44.723972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.157 [2024-12-06 13:22:44.782172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:56.415 [2024-12-06 13:22:44.828754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.415 [2024-12-06 13:22:44.886027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:56.415 [2024-12-06 13:22:44.905139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.415 Running I/O for 1 seconds... 00:28:56.415 Running I/O for 1 seconds... 00:28:56.415 [2024-12-06 13:22:44.961679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:56.415 Running I/O for 1 seconds... 00:28:56.672 Running I/O for 1 seconds... 00:28:57.606 9872.00 IOPS, 38.56 MiB/s 00:28:57.606 Latency(us) 00:28:57.606 [2024-12-06T13:22:46.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.606 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:28:57.606 Nvme1n1 : 1.01 9918.95 38.75 0.00 0.00 12845.22 4319.42 16920.20 00:28:57.606 [2024-12-06T13:22:46.246Z] =================================================================================================================== 00:28:57.606 [2024-12-06T13:22:46.246Z] Total : 9918.95 38.75 0.00 0.00 12845.22 4319.42 16920.20 00:28:57.606 7872.00 IOPS, 30.75 MiB/s 00:28:57.606 Latency(us) 00:28:57.606 [2024-12-06T13:22:46.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.606 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:28:57.606 Nvme1n1 : 1.01 7950.50 31.06 0.00 0.00 16030.74 6911.07 26929.34 00:28:57.606 [2024-12-06T13:22:46.246Z] =================================================================================================================== 00:28:57.606 [2024-12-06T13:22:46.246Z] Total : 7950.50 31.06 0.00 0.00 16030.74 6911.07 26929.34 00:28:57.606 172840.00 IOPS, 675.16 MiB/s 00:28:57.606 Latency(us) 00:28:57.606 [2024-12-06T13:22:46.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.606 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:28:57.606 Nvme1n1 : 1.00 172507.64 673.86 0.00 0.00 737.88 305.34 1936.29 00:28:57.606 [2024-12-06T13:22:46.246Z] =================================================================================================================== 00:28:57.606 [2024-12-06T13:22:46.246Z] Total : 172507.64 673.86 0.00 0.00 737.88 305.34 1936.29 00:28:57.606 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 104222 00:28:57.606 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 104224 00:28:57.606 9178.00 IOPS, 35.85 MiB/s 00:28:57.606 Latency(us) 00:28:57.606 
[2024-12-06T13:22:46.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.606 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:28:57.606 Nvme1n1 : 1.01 9271.67 36.22 0.00 0.00 13756.67 2621.44 20375.74 00:28:57.606 [2024-12-06T13:22:46.246Z] =================================================================================================================== 00:28:57.606 [2024-12-06T13:22:46.246Z] Total : 9271.67 36.22 0.00 0.00 13756.67 2621.44 20375.74 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 104227 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.865 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.865 rmmod nvme_tcp 00:28:57.865 rmmod nvme_fabrics 00:28:58.124 rmmod nvme_keyring 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 104181 ']' 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 104181 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 104181 ']' 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 104181 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104181 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104181' 00:28:58.124 killing process with pid 104181 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 104181 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 104181 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.124 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:58.383 13:22:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.383 13:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:28:58.643 00:28:58.643 real 0m3.833s 00:28:58.643 user 0m13.539s 00:28:58.643 sys 0m2.640s 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:58.643 ************************************ 00:28:58.643 END TEST nvmf_bdev_io_wait 00:28:58.643 ************************************ 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:58.643 ************************************ 00:28:58.643 START TEST nvmf_queue_depth 00:28:58.643 ************************************ 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:58.643 * Looking for test storage... 
00:28:58.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:28:58.643 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:58.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.903 --rc genhtml_branch_coverage=1 00:28:58.903 --rc genhtml_function_coverage=1 00:28:58.903 --rc genhtml_legend=1 00:28:58.903 --rc geninfo_all_blocks=1 00:28:58.903 --rc geninfo_unexecuted_blocks=1 00:28:58.903 00:28:58.903 ' 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:58.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.903 --rc genhtml_branch_coverage=1 00:28:58.903 --rc genhtml_function_coverage=1 00:28:58.903 --rc genhtml_legend=1 00:28:58.903 --rc geninfo_all_blocks=1 00:28:58.903 --rc geninfo_unexecuted_blocks=1 00:28:58.903 00:28:58.903 ' 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:58.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.903 --rc genhtml_branch_coverage=1 00:28:58.903 --rc genhtml_function_coverage=1 00:28:58.903 --rc genhtml_legend=1 00:28:58.903 --rc geninfo_all_blocks=1 00:28:58.903 --rc geninfo_unexecuted_blocks=1 00:28:58.903 00:28:58.903 ' 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:58.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.903 --rc genhtml_branch_coverage=1 00:28:58.903 --rc genhtml_function_coverage=1 00:28:58.903 --rc genhtml_legend=1 00:28:58.903 --rc geninfo_all_blocks=1 00:28:58.903 --rc 
geninfo_unexecuted_blocks=1 00:28:58.903 00:28:58.903 ' 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.903 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:58.904 Cannot find device "nvmf_init_br" 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:58.904 Cannot find device "nvmf_init_br2" 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:58.904 Cannot find device "nvmf_tgt_br" 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:58.904 Cannot find device "nvmf_tgt_br2" 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:58.904 Cannot find device "nvmf_init_br" 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:58.904 Cannot find device "nvmf_init_br2" 00:28:58.904 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:28:58.905 
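The "Cannot find device" lines here (and the remaining probes just below) are expected on a clean runner: before building its virtual topology, nvmf_veth_init removes any interfaces left over from a previous run, and each failing command is followed by `# true` in the xtrace so the cleanup continues despite the errors. Sketched, under the assumption that the guard is a plain `|| true`:

    # Pre-clean leftovers from an earlier run; missing links are not an error.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true  # fails when the ns is absent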
13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:58.905 Cannot find device "nvmf_tgt_br" 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:58.905 Cannot find device "nvmf_tgt_br2" 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:58.905 Cannot find device "nvmf_br" 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:58.905 Cannot find device "nvmf_init_if" 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:58.905 Cannot find device "nvmf_init_if2" 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:58.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:58.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:58.905 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:59.164 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:59.165 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:59.165 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:28:59.165 00:28:59.165 --- 10.0.0.3 ping statistics --- 00:28:59.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.165 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:59.165 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:59.165 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:28:59.165 00:28:59.165 --- 10.0.0.4 ping statistics --- 00:28:59.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.165 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:59.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:28:59.165 00:28:59.165 --- 10.0.0.1 ping statistics --- 00:28:59.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.165 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:59.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:59.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:28:59.165 00:28:59.165 --- 10.0.0.2 ping statistics --- 00:28:59.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.165 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=104490 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 104490 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 104490 ']' 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
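The plumbing above (common.sh@177-225) builds a self-contained virtual network for the test: the initiator ends of the veth pairs stay in the root namespace, the target ends move into nvmf_tgt_ns_spdk, everything is joined by the nvmf_br bridge, ACCEPT rules are inserted with an SPDK_NVMF comment so iptr can strip them at teardown, and the four pings confirm reachability in both directions. Condensed from the commands in the log (one of the two mirrored veth pairs shown per side):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.3    # root namespace -> target namespace across the bridge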
00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.165 13:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:59.425 [2024-12-06 13:22:47.850130] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:59.425 [2024-12-06 13:22:47.851561] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:28:59.425 [2024-12-06 13:22:47.851642] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.425 [2024-12-06 13:22:48.013800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.684 [2024-12-06 13:22:48.072350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.684 [2024-12-06 13:22:48.072413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.684 [2024-12-06 13:22:48.072427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.684 [2024-12-06 13:22:48.072437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.684 [2024-12-06 13:22:48.072447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.684 [2024-12-06 13:22:48.072905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.684 [2024-12-06 13:22:48.180295] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:59.684 [2024-12-06 13:22:48.180696] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
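nvmfappstart has now launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2, pid 104490), and the notices above show interrupt mode taking effect: the app thread and the poll group are set to intr mode and a single reactor runs on core 1. waitforlisten is visible in the xtrace only through its rpc_addr/max_retries setup and the waiting message, so the loop below is an assumed reconstruction (the rpc_get_methods probe and the sleep interval are guesses; the socket path and message come from the log):

    # Poll until the SPDK app accepts RPCs on its UNIX socket.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1     # app died before listening
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }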
00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:59.684 [2024-12-06 13:22:48.269911] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:59.684 Malloc0 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.684 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
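queue_depth.sh@23-27 provisions the freshly started target entirely over JSON-RPC (rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock): a TCP transport with 8192-byte in-capsule data, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a listener on 10.0.0.3:4420, confirmed just below. The same sequence, condensed:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420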
00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:59.943 [2024-12-06 13:22:48.337778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=104525 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 104525 /var/tmp/bdevperf.sock 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 104525 ']' 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:59.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.943 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:59.943 [2024-12-06 13:22:48.406792] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
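The measurement itself is bdevperf, launched here in server mode (-z) on a private RPC socket and configured for the test's namesake load: queue depth 1024, 4 KiB verify I/O for 10 seconds. Once its socket is live, the controller is attached across the listener created above and bdevperf.py's perform_tests starts the run; reassembled from queue_depth.sh@29-35 in the log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests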
00:28:59.944 [2024-12-06 13:22:48.406938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104525 ] 00:28:59.944 [2024-12-06 13:22:48.558343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.202 [2024-12-06 13:22:48.631975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.202 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.202 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:00.202 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:00.202 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.202 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:00.202 NVMe0n1 00:29:00.202 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.202 13:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:00.461 Running I/O for 10 seconds... 00:29:02.773 8406.00 IOPS, 32.84 MiB/s [2024-12-06T13:22:51.980Z] 8699.00 IOPS, 33.98 MiB/s [2024-12-06T13:22:53.387Z] 8679.33 IOPS, 33.90 MiB/s [2024-12-06T13:22:54.324Z] 8828.75 IOPS, 34.49 MiB/s [2024-12-06T13:22:55.261Z] 8922.80 IOPS, 34.85 MiB/s [2024-12-06T13:22:56.197Z] 9014.17 IOPS, 35.21 MiB/s [2024-12-06T13:22:57.134Z] 9051.71 IOPS, 35.36 MiB/s [2024-12-06T13:22:58.069Z] 8969.12 IOPS, 35.04 MiB/s [2024-12-06T13:22:59.002Z] 8898.33 IOPS, 34.76 MiB/s [2024-12-06T13:22:59.260Z] 8867.00 IOPS, 34.64 MiB/s 00:29:10.620 Latency(us) 00:29:10.620 [2024-12-06T13:22:59.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.620 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:10.620 Verification LBA range: start 0x0 length 0x4000 00:29:10.620 NVMe0n1 : 10.07 8886.45 34.71 0.00 0.00 114684.40 21328.99 78643.20 00:29:10.620 [2024-12-06T13:22:59.260Z] =================================================================================================================== 00:29:10.620 [2024-12-06T13:22:59.260Z] Total : 8886.45 34.71 0.00 0.00 114684.40 21328.99 78643.20 00:29:10.620 { 00:29:10.620 "results": [ 00:29:10.620 { 00:29:10.620 "job": "NVMe0n1", 00:29:10.620 "core_mask": "0x1", 00:29:10.620 "workload": "verify", 00:29:10.620 "status": "finished", 00:29:10.620 "verify_range": { 00:29:10.620 "start": 0, 00:29:10.620 "length": 16384 00:29:10.620 }, 00:29:10.620 "queue_depth": 1024, 00:29:10.620 "io_size": 4096, 00:29:10.620 "runtime": 10.073426, 00:29:10.620 "iops": 8886.450349662568, 00:29:10.620 "mibps": 34.712696678369404, 00:29:10.620 "io_failed": 0, 00:29:10.620 "io_timeout": 0, 00:29:10.620 "avg_latency_us": 114684.40003144146, 00:29:10.620 "min_latency_us": 21328.98909090909, 00:29:10.620 "max_latency_us": 78643.2 00:29:10.620 } 00:29:10.620 ], 00:29:10.620 
"core_count": 1 00:29:10.620 } 00:29:10.620 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 104525 00:29:10.620 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 104525 ']' 00:29:10.620 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 104525 00:29:10.620 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:10.620 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.620 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104525 00:29:10.620 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:10.620 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:10.620 killing process with pid 104525 00:29:10.620 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104525' 00:29:10.620 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 104525 00:29:10.620 Received shutdown signal, test time was about 10.000000 seconds 00:29:10.621 00:29:10.621 Latency(us) 00:29:10.621 [2024-12-06T13:22:59.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.621 [2024-12-06T13:22:59.261Z] =================================================================================================================== 00:29:10.621 [2024-12-06T13:22:59.261Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.621 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 104525 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:10.879 rmmod nvme_tcp 00:29:10.879 rmmod nvme_fabrics 00:29:10.879 rmmod nvme_keyring 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:10.879 13:22:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 104490 ']' 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 104490 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 104490 ']' 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 104490 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104490 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:10.879 killing process with pid 104490 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104490' 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 104490 00:29:10.879 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 104490 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:11.138 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:11.396 13:22:59 
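Teardown mirrors setup: killprocess stops the target (104490, running as reactor_1), nvmfcleanup unloads the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), iptr saves and restores iptables minus the SPDK_NVMF-tagged rules, and nvmf_veth_fini, in progress here, deletes the links and the namespace. The module unload retries up to 20 times per the `for i in {1..20}` in the xtrace; its body is not fully shown, so this is an assumed sketch:

    set +e
    for i in {1..20}; do
        # Removing nvme-tcp also drags out nvme_fabrics/nvme_keyring once idle.
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics 2>/dev/null
    set -e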
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:29:11.396 00:29:11.396 real 0m12.841s 00:29:11.396 user 0m21.203s 00:29:11.396 sys 0m2.401s 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.396 ************************************ 00:29:11.396 END TEST nvmf_queue_depth 00:29:11.396 ************************************ 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:11.396 ************************************ 00:29:11.396 START TEST nvmf_target_multipath 00:29:11.396 ************************************ 00:29:11.396 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:11.656 * Looking for test storage... 
00:29:11.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:11.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.656 --rc genhtml_branch_coverage=1 00:29:11.656 --rc genhtml_function_coverage=1 00:29:11.656 --rc genhtml_legend=1 00:29:11.656 --rc geninfo_all_blocks=1 00:29:11.656 --rc geninfo_unexecuted_blocks=1 00:29:11.656 00:29:11.656 ' 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:11.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.656 --rc genhtml_branch_coverage=1 00:29:11.656 --rc genhtml_function_coverage=1 00:29:11.656 --rc genhtml_legend=1 00:29:11.656 --rc geninfo_all_blocks=1 00:29:11.656 --rc geninfo_unexecuted_blocks=1 00:29:11.656 00:29:11.656 ' 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:11.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.656 --rc genhtml_branch_coverage=1 00:29:11.656 --rc genhtml_function_coverage=1 00:29:11.656 --rc genhtml_legend=1 00:29:11.656 --rc geninfo_all_blocks=1 00:29:11.656 --rc geninfo_unexecuted_blocks=1 00:29:11.656 00:29:11.656 ' 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:11.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.656 --rc genhtml_branch_coverage=1 00:29:11.656 --rc genhtml_function_coverage=1 00:29:11.656 --rc 
genhtml_legend=1 00:29:11.656 --rc geninfo_all_blocks=1 00:29:11.656 --rc geninfo_unexecuted_blocks=1 00:29:11.656 00:29:11.656 ' 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.656 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.657 13:23:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.657 13:23:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:11.657 Cannot find device "nvmf_init_br" 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:11.657 Cannot find device "nvmf_init_br2" 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:11.657 Cannot find device "nvmf_tgt_br" 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:11.657 Cannot find device "nvmf_tgt_br2" 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:29:11.657 Cannot find device "nvmf_init_br" 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:11.657 Cannot find device "nvmf_init_br2" 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:29:11.657 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:11.915 Cannot find device "nvmf_tgt_br" 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:11.915 Cannot find device "nvmf_tgt_br2" 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:11.915 Cannot find device "nvmf_br" 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:11.915 Cannot find device "nvmf_init_if" 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:11.915 Cannot find device "nvmf_init_if2" 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:11.915 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:11.915 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:11.915 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:11.916 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:12.173 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:12.173 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:29:12.173 00:29:12.173 --- 10.0.0.3 ping statistics --- 00:29:12.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.173 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:12.173 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:12.173 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:29:12.173 00:29:12.173 --- 10.0.0.4 ping statistics --- 00:29:12.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.173 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:12.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:12.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:29:12.173 00:29:12.173 --- 10.0.0.1 ping statistics --- 00:29:12.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.173 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:12.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:12.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:29:12.173 00:29:12.173 --- 10.0.0.2 ping statistics --- 00:29:12.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.173 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.173 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.174 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=104892 00:29:12.174 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:12.174 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 104892 00:29:12.174 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 104892 ']' 00:29:12.174 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.174 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
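Everything from nvmf_veth_init through the four pings above is the harness building its virtual test network: four veth pairs, a bridge (nvmf_br), and a network namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces, with 10.0.0.1/10.0.0.2 as initiator addresses, 10.0.0.3/10.0.0.4 as target addresses, and iptables ACCEPT rules for port 4420. A minimal sketch of the same topology, reduced to one initiator/target pair (interface names and addresses are taken from the log; this condenses, rather than reproduces, nvmf_veth_init in nvmf/common.sh):

  # Build one initiator<->target path of the topology traced above.
  set -e
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"

  # Each endpoint is a veth pair; the *_br ends stay in the root
  # namespace so they can be enslaved to the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns "$NS"

  # Addresses as in the log: .1 = initiator side, .3 = target side.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up
  ip link set nvmf_tgt_br up

  # The bridge stitches the root-namespace ends together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Same sanity check the harness runs before starting the target.
  ping -c 1 10.0.0.3

nvmf_tgt is then launched inside the namespace (the "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF" entry above), so the target listens on 10.0.0.3/10.0.0.4 while the initiator connects from the root namespace.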
00:29:12.174 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.174 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.174 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 [2024-12-06 13:23:00.680542] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:12.174 [2024-12-06 13:23:00.681917] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:29:12.174 [2024-12-06 13:23:00.681988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.432 [2024-12-06 13:23:00.841779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:12.432 [2024-12-06 13:23:00.941385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.432 [2024-12-06 13:23:00.941499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.432 [2024-12-06 13:23:00.941525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.432 [2024-12-06 13:23:00.941543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.432 [2024-12-06 13:23:00.941559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.432 [2024-12-06 13:23:00.943151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.432 [2024-12-06 13:23:00.943245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:12.432 [2024-12-06 13:23:00.943297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:12.432 [2024-12-06 13:23:00.943303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.432 [2024-12-06 13:23:01.053425] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:12.432 [2024-12-06 13:23:01.053895] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:12.432 [2024-12-06 13:23:01.054235] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:12.432 [2024-12-06 13:23:01.054569] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:12.432 [2024-12-06 13:23:01.054996] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
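With the target now up in interrupt mode (one reactor per core on 0-3, poll-group threads switched to interrupt mode), all further configuration is driven over JSON-RPC. The rpc.py and nvme connect invocations traced in the following lines boil down to this sequence (commands are as they appear in the log, shortened to variables for readability; the flag annotations are the editor's reading of the rpc.py and nvme-cli options, not output from the run):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8
  HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8

  $RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport
  $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM disk, 512 B blocks
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME -r  # -r: enable ANA reporting
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0

  # Two listeners on one subsystem give the initiator two ANA-managed
  # paths to the same namespace, which is the point of a multipath test.
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.4 -s 4420

  # One connect per path; -g/-G request TCP header/data digests.
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t tcp -n "$NQN" -a 10.0.0.3 -s 4420 -g -G
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t tcp -n "$NQN" -a 10.0.0.4 -s 4420 -g -G

Because both connects target the same subsystem NQN, the kernel's native NVMe multipath merges them into one block device (nvme0n1) with two path devices (nvme0c0n1, nvme0c1n1), which is what the later check_ana_state calls inspect.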
00:29:12.689 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.690 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:29:12.690 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.690 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.690 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:12.690 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.690 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:12.948 [2024-12-06 13:23:01.424801] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.948 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:13.206 Malloc0 00:29:13.206 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:29:13.773 13:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.031 13:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:14.289 [2024-12-06 13:23:02.712775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:14.289 13:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:29:14.548 [2024-12-06 13:23:02.964651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:29:14.548 13:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:29:14.548 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:29:14.806 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:29:14.806 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:29:14.806 13:23:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:14.806 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:14.806 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=105022 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:29:16.708 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:29:16.708 [global] 00:29:16.708 thread=1 00:29:16.708 invalidate=1 00:29:16.708 rw=randrw 00:29:16.708 time_based=1 00:29:16.708 runtime=6 00:29:16.708 ioengine=libaio 00:29:16.708 direct=1 00:29:16.708 bs=4096 00:29:16.708 iodepth=128 00:29:16.708 norandommap=0 00:29:16.708 numjobs=1 00:29:16.708 00:29:16.708 verify_dump=1 00:29:16.708 verify_backlog=512 00:29:16.708 verify_state_save=0 00:29:16.708 do_verify=1 00:29:16.708 verify=crc32c-intel 00:29:16.708 [job0] 00:29:16.708 filename=/dev/nvme0n1 00:29:16.708 Could not set queue depth (nvme0n1) 00:29:16.966 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:16.966 fio-3.35 00:29:16.966 Starting 1 thread 00:29:17.902 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:29:17.902 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:29:18.160 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:29:19.535 13:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:29:19.535 13:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:29:19.535 13:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:29:19.535 13:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:19.535 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:29:19.793 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:29:21.170 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:29:21.170 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
00:29:21.170 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:29:21.170 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 105022
00:29:23.073
00:29:23.073 job0: (groupid=0, jobs=1): err= 0: pid=105043: Fri Dec 6 13:23:11 2024
00:29:23.073 read: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(257MiB/6007msec)
00:29:23.073 slat (usec): min=3, max=7470, avg=52.49, stdev=258.99
00:29:23.073 clat (usec): min=943, max=16042, avg=7778.80, stdev=1231.59
00:29:23.073 lat (usec): min=963, max=16056, avg=7831.29, stdev=1247.99
00:29:23.073 clat percentiles (usec):
00:29:23.073 | 1.00th=[ 4686], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7046],
00:29:23.073 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7898],
00:29:23.073 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[10028],
00:29:23.073 | 99.00th=[11600], 99.50th=[12125], 99.90th=[13042], 99.95th=[13829],
00:29:23.073 | 99.99th=[14484]
00:29:23.073 bw ( KiB/s): min= 7696, max=29864, per=52.58%, avg=23062.09, stdev=7046.53, samples=11
00:29:23.073 iops : min= 1924, max= 7466, avg=5765.45, stdev=1761.64, samples=11
00:29:23.073 write: IOPS=6706, BW=26.2MiB/s (27.5MB/s)(136MiB/5209msec); 0 zone resets
00:29:23.073 slat (usec): min=7, max=3869, avg=62.99, stdev=155.95
00:29:23.073 clat (usec): min=412, max=14051, avg=7143.68, stdev=998.95
00:29:23.073 lat (usec): min=967, max=14079, avg=7206.67, stdev=1002.39
00:29:23.073 clat percentiles (usec):
00:29:23.073 | 1.00th=[ 3949], 5.00th=[ 5473], 10.00th=[ 6259], 20.00th=[ 6652],
00:29:23.073 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7373],
00:29:23.073 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8356],
00:29:23.073 | 99.00th=[10421], 99.50th=[11207], 99.90th=[12387], 99.95th=[12780],
00:29:23.073 | 99.99th=[13173]
00:29:23.073 bw ( KiB/s): min= 8160, max=29560, per=86.12%, avg=23102.82, stdev=6773.56, samples=11
00:29:23.073 iops : min= 2040, max= 7390, avg=5775.64, stdev=1693.40, samples=11
00:29:23.073 lat (usec) : 500=0.01%, 1000=0.01%
00:29:23.073 lat (msec) : 2=0.04%, 4=0.61%, 10=95.52%, 20=3.84%
00:29:23.073 cpu : usr=5.11%, sys=22.38%, ctx=7785, majf=0, minf=127
00:29:23.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:29:23.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:23.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:23.073 issued rwts: total=65870,34932,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:23.073 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:23.073
00:29:23.073 Run status group 0 (all jobs):
00:29:23.073 READ: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=257MiB (270MB), run=6007-6007msec
00:29:23.073 WRITE: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=136MiB (143MB), run=5209-5209msec
00:29:23.073
00:29:23.073 Disk stats (read/write):
00:29:23.073 nvme0n1: ios=64926/34281, merge=0/0, ticks=474746/233607, in_queue=708353, util=98.65%
13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:29:23.333 13:23:11
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:29:23.591 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:29:24.527 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:29:24.527 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:29:24.527 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:29:24.527 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:29:24.527 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=105166 00:29:24.527 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:29:24.527 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:29:24.527 [global] 00:29:24.527 thread=1 00:29:24.527 invalidate=1 00:29:24.527 rw=randrw 00:29:24.527 time_based=1 00:29:24.527 runtime=6 00:29:24.527 ioengine=libaio 00:29:24.527 direct=1 00:29:24.527 bs=4096 00:29:24.527 iodepth=128 00:29:24.527 norandommap=0 00:29:24.527 numjobs=1 00:29:24.527 00:29:24.527 verify_dump=1 00:29:24.527 verify_backlog=512 00:29:24.527 verify_state_save=0 00:29:24.527 do_verify=1 00:29:24.527 verify=crc32c-intel 00:29:24.527 [job0] 00:29:24.527 filename=/dev/nvme0n1 00:29:24.527 Could not set queue depth (nvme0n1) 00:29:24.785 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:24.785 fio-3.35 00:29:24.785 Starting 1 thread 00:29:25.719 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:29:25.977 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:29:26.236 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:29:27.169 13:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:29:27.169 13:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:29:27.169 13:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:29:27.169 13:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:27.426 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:29:27.989 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:29:28.935 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:29:28.935 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:29:28.935 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:29:28.935 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 105166 00:29:30.876 00:29:30.876 job0: (groupid=0, jobs=1): err= 0: pid=105193: Fri Dec 6 13:23:19 2024 00:29:30.876 read: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(287MiB/6004msec) 00:29:30.876 slat (usec): min=3, max=7173, avg=40.64, stdev=222.95 00:29:30.876 clat (usec): min=284, max=46124, avg=7057.00, stdev=1721.19 00:29:30.876 lat (usec): min=305, max=46134, avg=7097.64, stdev=1739.30 00:29:30.876 clat percentiles (usec): 00:29:30.876 | 1.00th=[ 2835], 5.00th=[ 4146], 10.00th=[ 4817], 20.00th=[ 5735], 00:29:30.876 | 30.00th=[ 6521], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7504], 00:29:30.876 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[ 9634], 00:29:30.876 | 99.00th=[11207], 99.50th=[11863], 99.90th=[12649], 99.95th=[12911], 00:29:30.876 | 99.99th=[44827] 00:29:30.876 bw ( KiB/s): min= 6648, max=43096, per=53.96%, avg=26417.45, stdev=11788.87, samples=11 00:29:30.876 iops : min= 1662, max=10774, avg=6604.36, stdev=2947.22, samples=11 00:29:30.876 write: IOPS=7645, BW=29.9MiB/s (31.3MB/s)(154MiB/5158msec); 0 zone resets 00:29:30.876 slat (usec): min=12, max=3934, avg=52.47, stdev=118.82 00:29:30.876 clat (usec): min=148, max=13121, avg=6160.97, stdev=1589.73 00:29:30.876 lat (usec): min=230, max=13145, avg=6213.45, stdev=1601.90 00:29:30.876 clat percentiles (usec): 00:29:30.876 | 1.00th=[ 2507], 5.00th=[ 3326], 10.00th=[ 3818], 20.00th=[ 4555], 00:29:30.876 | 30.00th=[ 5276], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 6915], 00:29:30.876 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8029], 00:29:30.876 | 99.00th=[ 9896], 99.50th=[10683], 99.90th=[11863], 99.95th=[12256], 00:29:30.876 | 99.99th=[13042] 00:29:30.876 bw ( KiB/s): min= 7088, 
max=42552, per=86.39%, avg=26420.36, stdev=11580.24, samples=11 00:29:30.876 iops : min= 1772, max=10638, avg=6605.09, stdev=2895.06, samples=11 00:29:30.876 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:29:30.876 lat (msec) : 2=0.21%, 4=6.85%, 10=90.30%, 20=2.60%, 50=0.01% 00:29:30.876 cpu : usr=6.03%, sys=26.52%, ctx=9311, majf=0, minf=127 00:29:30.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:29:30.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:30.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:30.876 issued rwts: total=73486,39436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:30.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:30.876 00:29:30.876 Run status group 0 (all jobs): 00:29:30.876 READ: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=287MiB (301MB), run=6004-6004msec 00:29:30.876 WRITE: bw=29.9MiB/s (31.3MB/s), 29.9MiB/s-29.9MiB/s (31.3MB/s-31.3MB/s), io=154MiB (162MB), run=5158-5158msec 00:29:30.876 00:29:30.876 Disk stats (read/write): 00:29:30.876 nvme0n1: ios=72462/38937, merge=0/0, ticks=475482/224244, in_queue=699726, util=98.63% 00:29:30.876 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:31.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:31.134 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:31.134 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:29:31.134 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:31.134 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:31.134 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:31.134 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:31.134 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:29:31.134 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:31.392 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:29:31.392 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:29:31.392 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:29:31.392 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:29:31.392 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:31.392 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:31.392 13:23:20 
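
The check_ana_state helper that gated each ANA transition above is a small sysfs polling loop. Reconstructed from the multipath.sh xtrace (variable names match the trace; the body is an approximation, not the verbatim script):

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Poll until the kernel reports the expected ANA state for this path;
    # the trace shows one retry per second, giving up after ~20 attempts.
    while [[ ! -e $ana_state_f || $(< "$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
}

In the run above each transition settled within a single one-second retry, e.g. nvme0c1n1 moving from optimized to non-optimized after the listener ANA state change.
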
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:31.392 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:31.392 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:31.392 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:31.392 rmmod nvme_tcp 00:29:31.650 rmmod nvme_fabrics 00:29:31.650 rmmod nvme_keyring 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 104892 ']' 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 104892 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 104892 ']' 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 104892 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104892 00:29:31.650 killing process with pid 104892 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104892' 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 104892 00:29:31.650 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 104892 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # iptables-restore 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.910 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.169 ************************************ 00:29:32.170 END TEST nvmf_target_multipath 00:29:32.170 ************************************ 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:29:32.170 00:29:32.170 real 0m20.583s 00:29:32.170 user 1m10.320s 00:29:32.170 sys 0m10.220s 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:32.170 ************************************ 00:29:32.170 START TEST nvmf_zcopy 00:29:32.170 ************************************ 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:32.170 * Looking for test storage... 00:29:32.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:32.170 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:32.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.429 --rc genhtml_branch_coverage=1 00:29:32.429 --rc genhtml_function_coverage=1 00:29:32.429 --rc genhtml_legend=1 00:29:32.429 --rc geninfo_all_blocks=1 00:29:32.429 --rc geninfo_unexecuted_blocks=1 00:29:32.429 00:29:32.429 ' 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:32.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.429 --rc genhtml_branch_coverage=1 00:29:32.429 --rc genhtml_function_coverage=1 00:29:32.429 --rc genhtml_legend=1 00:29:32.429 --rc geninfo_all_blocks=1 00:29:32.429 --rc geninfo_unexecuted_blocks=1 00:29:32.429 00:29:32.429 ' 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:32.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.429 --rc genhtml_branch_coverage=1 00:29:32.429 --rc genhtml_function_coverage=1 00:29:32.429 --rc genhtml_legend=1 00:29:32.429 --rc geninfo_all_blocks=1 00:29:32.429 --rc geninfo_unexecuted_blocks=1 00:29:32.429 00:29:32.429 ' 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:32.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.429 --rc genhtml_branch_coverage=1 00:29:32.429 --rc genhtml_function_coverage=1 00:29:32.429 --rc genhtml_legend=1 00:29:32.429 --rc geninfo_all_blocks=1 00:29:32.429 --rc geninfo_unexecuted_blocks=1 00:29:32.429 00:29:32.429 ' 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
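
The lt 1.15 2 call traced here goes through cmp_versions, the field-by-field dotted-version comparison in scripts/common.sh that decides which lcov flags apply. A condensed sketch of that logic (approximate; the real helper also normalizes each field through the decimal function seen in the trace):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # Walk the longer of the two arrays; missing fields count as 0.
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
            [[ $op == *'>'* ]] && return 0 || return 1
        elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
            [[ $op == *'<'* ]] && return 0 || return 1
        fi
    done
    [[ $op == *'='* ]]   # all fields equal
}
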
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.429 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.430 13:23:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:32.430 13:23:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:32.430 Cannot find device "nvmf_init_br" 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:32.430 Cannot find device "nvmf_init_br2" 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:32.430 Cannot find device "nvmf_tgt_br" 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:32.430 Cannot find device "nvmf_tgt_br2" 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:32.430 Cannot find device "nvmf_init_br" 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:32.430 Cannot find device "nvmf_init_br2" 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:29:32.430 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:32.430 Cannot find device "nvmf_tgt_br" 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:32.431 Cannot find device "nvmf_tgt_br2" 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:32.431 Cannot find device 
"nvmf_br" 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:32.431 Cannot find device "nvmf_init_if" 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:32.431 Cannot find device "nvmf_init_if2" 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:32.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:32.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:32.431 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:32.431 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:32.431 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:32.431 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:32.431 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:32.690 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:32.690 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:29:32.690 00:29:32.690 --- 10.0.0.3 ping statistics --- 00:29:32.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.690 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:32.690 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:29:32.690 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:29:32.690 00:29:32.690 --- 10.0.0.4 ping statistics --- 00:29:32.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.690 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:32.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:32.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:29:32.690 00:29:32.690 --- 10.0.0.1 ping statistics --- 00:29:32.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.690 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:32.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:29:32.690 00:29:32.690 --- 10.0.0.2 ping statistics --- 00:29:32.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.690 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=105516 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 105516 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 105516 ']' 00:29:32.690 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:29:32.691 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.691 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.691 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.691 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:32.949 [2024-12-06 13:23:21.338263] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:32.949 [2024-12-06 13:23:21.339334] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:29:32.949 [2024-12-06 13:23:21.339406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.949 [2024-12-06 13:23:21.481743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.949 [2024-12-06 13:23:21.536399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.949 [2024-12-06 13:23:21.536509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.949 [2024-12-06 13:23:21.536522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.949 [2024-12-06 13:23:21.536534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.949 [2024-12-06 13:23:21.536542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.949 [2024-12-06 13:23:21.536949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.208 [2024-12-06 13:23:21.629160] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:33.208 [2024-12-06 13:23:21.629482] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
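
Those two intr-mode notices confirm the target came up the way nvmfappstart launched it: inside the test namespace, pinned to core 1, with interrupt mode replacing the usual busy-poll reactors. Condensed from the NVMF_APP trace above:

# Launch traced above (nvmfpid=105516 in this run):
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs

waitforlisten is the wait whose "Waiting for process to start up and listen on UNIX domain socket" message appears above.
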
00:29:33.775 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.775 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:29:33.775 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.775 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.775 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.034 [2024-12-06 13:23:22.433753] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.034 [2024-12-06 13:23:22.449848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:34.034 13:23:22 
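
Setup for the zcopy test collapses to the handful of RPCs traced above; shown here invoked through rpc.py directly, where the script goes through its rpc_cmd wrapper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy   # TCP transport, zero-copy on, in-capsule data size 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MiB malloc bdev, 4096-byte blocks

The malloc bdev becomes namespace 1 of cnode1 in the next step; that NSID is what the deliberate duplicate add_ns calls later in the run collide with.
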
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.034 malloc0 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.034 { 00:29:34.034 "params": { 00:29:34.034 "name": "Nvme$subsystem", 00:29:34.034 "trtype": "$TEST_TRANSPORT", 00:29:34.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.034 "adrfam": "ipv4", 00:29:34.034 "trsvcid": "$NVMF_PORT", 00:29:34.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.034 "hdgst": ${hdgst:-false}, 00:29:34.034 "ddgst": ${ddgst:-false} 00:29:34.034 }, 00:29:34.034 "method": "bdev_nvme_attach_controller" 00:29:34.034 } 00:29:34.034 EOF 00:29:34.034 )") 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:34.034 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:34.034 "params": { 00:29:34.034 "name": "Nvme1", 00:29:34.034 "trtype": "tcp", 00:29:34.034 "traddr": "10.0.0.3", 00:29:34.034 "adrfam": "ipv4", 00:29:34.034 "trsvcid": "4420", 00:29:34.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.034 "hdgst": false, 00:29:34.034 "ddgst": false 00:29:34.034 }, 00:29:34.034 "method": "bdev_nvme_attach_controller" 00:29:34.034 }' 00:29:34.034 [2024-12-06 13:23:22.553735] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
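
The JSON handed to bdevperf over /dev/fd/62 is assembled by gen_nvmf_target_json, whose pieces are visible above: one heredoc params block per subsystem, joined with IFS=',' and pretty-printed by jq. Per nvmf/common.sh the printed block sits inside a bdev-subsystem wrapper, approximately (sketch; the wrapper itself is not echoed in the trace):

jq . << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON

bdevperf then attaches Nvme1 over TCP to 10.0.0.3:4420 and runs the verify workload (-t 10 -q 128 -w verify -o 8192) against it, producing the per-second IOPS samples and latency summary above.
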
00:29:34.034 [2024-12-06 13:23:22.553837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105573 ] 00:29:34.293 [2024-12-06 13:23:22.706838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.293 [2024-12-06 13:23:22.768646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.552 Running I/O for 10 seconds... 00:29:36.422 6120.00 IOPS, 47.81 MiB/s [2024-12-06T13:23:25.994Z] 6213.50 IOPS, 48.54 MiB/s [2024-12-06T13:23:27.369Z] 6234.67 IOPS, 48.71 MiB/s [2024-12-06T13:23:28.305Z] 6227.75 IOPS, 48.65 MiB/s [2024-12-06T13:23:29.242Z] 6251.40 IOPS, 48.84 MiB/s [2024-12-06T13:23:30.176Z] 6264.67 IOPS, 48.94 MiB/s [2024-12-06T13:23:31.113Z] 6276.57 IOPS, 49.04 MiB/s [2024-12-06T13:23:32.048Z] 6243.75 IOPS, 48.78 MiB/s [2024-12-06T13:23:33.002Z] 6237.56 IOPS, 48.73 MiB/s [2024-12-06T13:23:33.002Z] 6232.30 IOPS, 48.69 MiB/s 00:29:44.362 Latency(us) 00:29:44.362 [2024-12-06T13:23:33.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.362 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:44.362 Verification LBA range: start 0x0 length 0x1000 00:29:44.362 Nvme1n1 : 10.02 6233.55 48.70 0.00 0.00 20468.71 3068.28 27525.12 00:29:44.362 [2024-12-06T13:23:33.002Z] =================================================================================================================== 00:29:44.362 [2024-12-06T13:23:33.002Z] Total : 6233.55 48.70 0.00 0.00 20468.71 3068.28 27525.12 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=105681 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:44.620 { 00:29:44.620 "params": { 00:29:44.620 "name": "Nvme$subsystem", 00:29:44.620 "trtype": "$TEST_TRANSPORT", 00:29:44.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.620 "adrfam": "ipv4", 00:29:44.620 "trsvcid": "$NVMF_PORT", 00:29:44.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.620 "hdgst": ${hdgst:-false}, 00:29:44.620 "ddgst": ${ddgst:-false} 00:29:44.620 }, 00:29:44.620 "method": "bdev_nvme_attach_controller" 00:29:44.620 } 00:29:44.620 EOF 00:29:44.620 )") 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:44.620 [2024-12-06 
13:23:33.181492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.620 [2024-12-06 13:23:33.181533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:44.620 2024/12/06 13:23:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:44.620 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:44.620 "params": { 00:29:44.620 "name": "Nvme1", 00:29:44.620 "trtype": "tcp", 00:29:44.620 "traddr": "10.0.0.3", 00:29:44.620 "adrfam": "ipv4", 00:29:44.620 "trsvcid": "4420", 00:29:44.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:44.620 "hdgst": false, 00:29:44.620 "ddgst": false 00:29:44.620 }, 00:29:44.620 "method": "bdev_nvme_attach_controller" 00:29:44.620 }' 00:29:44.620 [2024-12-06 13:23:33.193437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.620 [2024-12-06 13:23:33.193474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.620 2024/12/06 13:23:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:44.620 [2024-12-06 13:23:33.205444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.620 [2024-12-06 13:23:33.205495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.620 2024/12/06 13:23:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:44.620 [2024-12-06 13:23:33.217432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.620 [2024-12-06 13:23:33.217467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.620 2024/12/06 13:23:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:44.620 [2024-12-06 13:23:33.227757] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
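[Editor's note] A quick sanity check on the 10-second run summarized in the Latency table above: bdevperf's MiB/s column is just IOPS multiplied by the -o IO size (8192 bytes) and divided by 2^20. For the Nvme1n1 total row:

# 6233.55 IOPS x 8192 B per IO, in MiB/s (1 MiB = 1048576 B) -> ~48.70
awk 'BEGIN { printf "%.2f MiB/s\n", 6233.55 * 8192 / 1048576 }'

The same conversion reproduces every per-second tick in the run (6120.00 IOPS -> 47.81 MiB/s, and so on).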
00:29:44.620 [2024-12-06 13:23:33.227829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105681 ]
[... 13:23:33.229 through 13:23:33.361: the three-line nvmf_subsystem_add_ns failure shown above (subsystem.c:2130 "Requested NSID 1 already in use", nvmf_rpc.c:1520 "Unable to add namespace", JSON-RPC Code=-32602 Msg=Invalid parameters) repeats 12 times, roughly every 12 ms; only the timestamps change ...]
00:29:44.880 [2024-12-06 13:23:33.372817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... 13:23:33.373 through 13:23:33.421: 5 more identical repeats ...]
00:29:44.880 [2024-12-06 13:23:33.426087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... 13:23:33.433 through 13:23:33.601: 15 more identical repeats ...]
00:29:45.139 Running I/O for 5 seconds...
[... 13:23:33.617 through 13:23:33.650: 4 more identical repeats ...]
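[Editor's note] The long run of identical failures above and below is expected: malloc0 was attached to cnode1 as NSID 1 before the first bdevperf run, so each retry of the same add is rejected with -32602 while the 5-second job executes. A hand reproduction, assuming the stock scripts/rpc.py command (the exact flags are not shown in this log, so treat them as illustrative):

# First add succeeds; repeating it trips the NSID collision logged here.
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0   # first call: OK
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0   # repeat: Code=-32602 ("Requested NSID 1 already in use")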
[... 13:23:33.666 through 13:23:34.600: the identical three-line failure keeps repeating every 10 to 20 ms while the 5-second randrw job runs; only the timestamps change ...]
00:29:46.175 11913.00 IOPS, 93.07 MiB/s [2024-12-06T13:23:34.815Z]
[... 13:23:34.615 through 13:23:34.800: 12 more identical repeats ...]
00:29:46.450 [2024-12-06 13:23:34.815711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.450 [2024-12-06 13:23:34.815745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.450 2024/12/06 13:23:34 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.450 [2024-12-06 13:23:34.833043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.450 [2024-12-06 13:23:34.833094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.450 2024/12/06 13:23:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.450 [2024-12-06 13:23:34.854216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.450 [2024-12-06 13:23:34.854251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.450 2024/12/06 13:23:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.450 [2024-12-06 13:23:34.873343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.450 [2024-12-06 13:23:34.873381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.450 2024/12/06 13:23:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.450 [2024-12-06 13:23:34.884097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.450 [2024-12-06 13:23:34.884131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.450 2024/12/06 13:23:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.450 [2024-12-06 13:23:34.898490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.450 [2024-12-06 13:23:34.898523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.450 2024/12/06 13:23:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.450 [2024-12-06 13:23:34.917911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.450 [2024-12-06 13:23:34.917946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.450 2024/12/06 13:23:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.450 [2024-12-06 13:23:34.937968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.450 [2024-12-06 13:23:34.938002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.450 2024/12/06 13:23:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.450 [2024-12-06 13:23:34.958218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.450 [2024-12-06 13:23:34.958253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.450 2024/12/06 13:23:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.450 [2024-12-06 13:23:34.976861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.450 [2024-12-06 13:23:34.976897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.450 2024/12/06 13:23:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.450 [2024-12-06 13:23:34.998206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.451 [2024-12-06 13:23:34.998242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.451 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.451 [2024-12-06 13:23:35.013663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.451 [2024-12-06 13:23:35.013714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.451 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.451 [2024-12-06 13:23:35.033964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.451 [2024-12-06 13:23:35.033999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.451 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.451 [2024-12-06 13:23:35.049196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.451 [2024-12-06 13:23:35.049256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.451 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.451 [2024-12-06 13:23:35.069571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.451 [2024-12-06 13:23:35.069619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.451 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.451 [2024-12-06 13:23:35.079724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.451 [2024-12-06 13:23:35.079761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.451 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.093218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.093267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.114219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.114269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.129427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.129489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.138674] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.138721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.154926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.154961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.174258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.174308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.193015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.193064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.213318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.213372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.223991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.224040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.237682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 
13:23:35.237731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.258161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.258196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.273279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.273329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.282805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.709 [2024-12-06 13:23:35.282841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.709 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.709 [2024-12-06 13:23:35.299102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.710 [2024-12-06 13:23:35.299137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.710 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.710 [2024-12-06 13:23:35.318415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.710 [2024-12-06 13:23:35.318482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.710 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.710 [2024-12-06 13:23:35.336274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.710 [2024-12-06 13:23:35.336478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.710 2024/12/06 13:23:35 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.349965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.350002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.369254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.369288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.389350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.389384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.399323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.399356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.413203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.413236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.422604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.422635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.438587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.438619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.458131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.458167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.476646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.476678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.488708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.488747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.502479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.502560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.521365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.521406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.531376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.531409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.546741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.546775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.565498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.565532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.575742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.575917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:46.968 [2024-12-06 13:23:35.591804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.968 [2024-12-06 13:23:35.591839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.968 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:47.226 [2024-12-06 13:23:35.607499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.226 [2024-12-06 13:23:35.607561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.226 11918.50 IOPS, 93.11 MiB/s [2024-12-06T13:23:35.866Z] 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
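
The run above is one iteration after another of the same negative test: the Go JSON-RPC client keeps calling nvmf_subsystem_add_ns with nsid 1 on nqn.2016-06.io.spdk:cnode1 while NSID 1 is already attached to that subsystem, so the target rejects each attempt with "Requested NSID 1 already in use" and the client reports JSON-RPC error -32602 (Invalid parameters); the interleaved IOPS lines show fio I/O still running against the target meanwhile. The %!s(bool=false) fragments in the params dump are Go's fmt notation for a bool value passed to a %s verb, not log corruption. A minimal sketch of triggering the same failure by hand with SPDK's scripts/rpc.py, assuming a running nvmf target and reusing the bdev name and NQN from the log (exact flags may vary by SPDK version):

    scripts/rpc.py bdev_malloc_create -b malloc0 64 512                                # 64 MiB backing bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --allow-any-host   # create the subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 --nsid 1   # first add succeeds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 --nsid 1   # duplicate NSID -> rejected with -32602
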
00:29:47.226 [2024-12-06 13:23:35.623768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.623803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.639100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.639263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.657442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.657518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.667959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.667991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.681552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.681584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.691038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.691067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.705718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.705764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.715070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.715102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.728958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.729004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.738549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.738581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.754062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.754094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.774222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.774269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.790173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.790203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.809918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.809949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.829028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.829061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.226 [2024-12-06 13:23:35.850153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.226 [2024-12-06 13:23:35.850184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.226 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.484 [2024-12-06 13:23:35.869331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.484 [2024-12-06 13:23:35.869381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.484 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.484 [2024-12-06 13:23:35.879252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.484 [2024-12-06 13:23:35.879285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.484 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.484 [2024-12-06 13:23:35.894525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.484 [2024-12-06 13:23:35.894572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.484 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:35.914609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:35.914655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:35.931841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:35.931871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:35.947323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:35.947354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:35.965966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:35.965997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:35.986265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:35.986296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:36.002455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:36.002513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:36.029525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:36.029557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:36.039234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:36.039280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:36.050977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:36.051007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:36.065853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:36.065912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:36.085552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:36.085598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:36.095705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:36.095750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:36.108861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:36.108892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.485 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.485 [2024-12-06 13:23:36.118667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.485 [2024-12-06 13:23:36.118698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.133696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.133725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.142988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.143018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.158967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.158997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.176775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.176807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.188904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.188936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.202610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.202640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.222720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.222766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.237967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.237998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.258731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.258793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.273634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.273665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.743 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.743 [2024-12-06 13:23:36.283190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.743 [2024-12-06 13:23:36.283223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.744 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.744 [2024-12-06 13:23:36.298640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.744 [2024-12-06 13:23:36.298675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.744 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.744 [2024-12-06 13:23:36.316778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.744 [2024-12-06 13:23:36.316943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.744 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.744 [2024-12-06 13:23:36.326722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.744 [2024-12-06 13:23:36.326884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.744 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.744 [2024-12-06 13:23:36.341756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.744 [2024-12-06 13:23:36.341910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.744 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.744 [2024-12-06 13:23:36.351213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.744 [2024-12-06 13:23:36.351403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.744 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:47.744 [2024-12-06 13:23:36.366571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:47.744 [2024-12-06 13:23:36.366714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.744 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:48.002 [2024-12-06 13:23:36.385362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:48.002 [2024-12-06 13:23:36.385553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:48.002 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:48.002 [2024-12-06 13:23:36.395171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:48.002 [2024-12-06 13:23:36.395207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:48.002 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:48.002 [2024-12-06 13:23:36.410648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:48.002 [2024-12-06 13:23:36.410699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:48.002 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:48.002 [2024-12-06 13:23:36.428763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:48.002 [2024-12-06 13:23:36.428800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:48.003 [2024-12-06 13:23:36.438594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:48.003 [2024-12-06 13:23:36.438740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:48.003 [2024-12-06 13:23:36.454770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:48.003 [2024-12-06 13:23:36.454928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 [2024-12-06 13:23:36.472731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.472782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 [2024-12-06 13:23:36.482447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.482525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 [2024-12-06 13:23:36.498630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.498663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 [2024-12-06 13:23:36.517115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.517304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 [2024-12-06 13:23:36.527068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.527255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 [2024-12-06 13:23:36.543141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.543313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:29:48.003 [2024-12-06 13:23:36.560912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.560947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 [2024-12-06 13:23:36.570707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.570877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 [2024-12-06 13:23:36.587210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.587383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 [2024-12-06 13:23:36.605296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.605496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 11877.00 IOPS, 92.79 MiB/s [2024-12-06T13:23:36.643Z] [2024-12-06 13:23:36.615662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.615834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.003 [2024-12-06 13:23:36.629987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.003 [2024-12-06 13:23:36.630131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.003 2024/12/06 13:23:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:48.262 [2024-12-06 13:23:36.650306] 
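The throughput samples interleaved with the errors come from the I/O workload running against the same target while the RPC loop executes; the rejected namespace adds do not disturb it. As a quick sanity check (my arithmetic, not in the log; the block size is inferred, not stated), 92.79 MiB/s at 11877.00 IOPS works out to 8 KiB per I/O:

    # Back-of-envelope check; values copied from the logged sample above.
    mib_per_s = 92.79
    iops = 11877.00
    kib_per_io = mib_per_s * 1024 / iops
    print(f"{kib_per_io:.2f} KiB per I/O")  # prints 8.00, i.e. 8 KiB I/Os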
[... the retries keep failing identically from 13:23:36.615662 through 13:23:37.595944 ...]
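Every entry in the run above is the same exchange: an nvmf_subsystem_add_ns call requesting NSID 1 on nqn.2016-06.io.spdk:cnode1, which the subsystem already holds, so spdk_nvmf_subsystem_add_ns_ext rejects it and the RPC layer returns JSON-RPC error -32602. (The %!s(bool=false) fragments are just Go's fmt output for booleans printed with %s by the Go JSON-RPC client that logs these calls.) A minimal sketch of the request behind each failure, reconstructed from the logged method and params and assuming the default /var/tmp/spdk.sock RPC socket (the socket path is not shown in the log):

    import json
    import socket

    # Reconstructed nvmf_subsystem_add_ns call: NSID 1 is already in use on
    # cnode1, so the target answers with Code=-32602 Msg=Invalid parameters.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/var/tmp/spdk.sock")  # assumed default SPDK RPC socket
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(4096).decode())  # expect an error object with code -32602

The same request with an unused nsid would succeed; the test keeps re-requesting NSID 1 on purpose, so every attempt fails the same way.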
00:29:49.042 11857.50 IOPS, 92.64 MiB/s [2024-12-06T13:23:37.682Z]
[... and from 13:23:37.611199 through 13:23:38.042297, ending with the final attempt of the burst ...]
00:29:49.561 [2024-12-06 13:23:38.062464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:49.561 [2024-12-06 13:23:38.062521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:49.561 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid
parameters 00:29:49.561 [2024-12-06 13:23:38.078012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.561 [2024-12-06 13:23:38.078046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.561 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.561 [2024-12-06 13:23:38.098096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.561 [2024-12-06 13:23:38.098138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.561 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.561 [2024-12-06 13:23:38.117998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.561 [2024-12-06 13:23:38.118028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.561 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.561 [2024-12-06 13:23:38.133597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.561 [2024-12-06 13:23:38.133641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.561 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.561 [2024-12-06 13:23:38.143371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.561 [2024-12-06 13:23:38.143403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.561 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.561 [2024-12-06 13:23:38.158839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.561 [2024-12-06 13:23:38.158884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.561 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.561 [2024-12-06 13:23:38.174319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:29:49.561 [2024-12-06 13:23:38.174365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.561 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.561 [2024-12-06 13:23:38.193639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.561 [2024-12-06 13:23:38.193696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.561 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.821 [2024-12-06 13:23:38.203441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.821 [2024-12-06 13:23:38.203498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.821 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.821 [2024-12-06 13:23:38.217958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.821 [2024-12-06 13:23:38.217990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.821 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.821 [2024-12-06 13:23:38.238354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.821 [2024-12-06 13:23:38.238398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.821 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.821 [2024-12-06 13:23:38.254200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.821 [2024-12-06 13:23:38.254262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.821 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.821 [2024-12-06 13:23:38.274057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.821 [2024-12-06 13:23:38.274088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:29:49.821 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.821 [2024-12-06 13:23:38.292398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.821 [2024-12-06 13:23:38.292444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.821 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.821 [2024-12-06 13:23:38.306727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.821 [2024-12-06 13:23:38.306790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.821 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.821 [2024-12-06 13:23:38.324956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.821 [2024-12-06 13:23:38.325002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.821 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.821 [2024-12-06 13:23:38.334718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.821 [2024-12-06 13:23:38.334763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.821 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.821 [2024-12-06 13:23:38.350509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.821 [2024-12-06 13:23:38.350567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.821 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.822 [2024-12-06 13:23:38.360056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.822 [2024-12-06 13:23:38.360101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.822 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.822 [2024-12-06 13:23:38.375003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.822 [2024-12-06 13:23:38.375050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.822 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.822 [2024-12-06 13:23:38.393013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.822 [2024-12-06 13:23:38.393059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.822 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.822 [2024-12-06 13:23:38.402858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.822 [2024-12-06 13:23:38.402908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.822 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.822 [2024-12-06 13:23:38.418664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.822 [2024-12-06 13:23:38.418715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.822 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.822 [2024-12-06 13:23:38.436954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.822 [2024-12-06 13:23:38.437107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.822 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:49.822 [2024-12-06 13:23:38.447483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.822 [2024-12-06 13:23:38.447643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.822 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:50.083 [2024-12-06 13:23:38.463157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.083 [2024-12-06 13:23:38.463333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.083 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:50.083 [2024-12-06 13:23:38.479593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.083 [2024-12-06 13:23:38.479766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.083 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:50.083 [2024-12-06 13:23:38.497517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.083 [2024-12-06 13:23:38.497552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.083 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:50.083 [2024-12-06 13:23:38.507479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.083 [2024-12-06 13:23:38.507524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.083 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:50.083 [2024-12-06 13:23:38.522915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.083 [2024-12-06 13:23:38.523062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.083 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:29:50.083 [2024-12-06 13:23:38.541551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.083 [2024-12-06 13:23:38.541695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.083 2024/12/06 13:23:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
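The failing call is easy to reproduce by hand. A minimal sketch using SPDK's rpc.py, with the same argument shape the test's rpc_cmd wrapper uses for nvmf_subsystem_add_ns later in this log (the scripts/ path and the pre-registered malloc0 bdev are taken from context, not shown being set up):

  # NSID 1 on cnode1 is already claimed, so re-adding any bdev under it
  # is rejected with JSON-RPC error -32602 (Invalid parameters):
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The params map printed above is just the JSON-RPC form of that call: bdev_name=malloc0 and nsid=1 under namespace, plus the target nqn.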
11835.00 IOPS, 92.46 MiB/s [2024-12-06T13:23:38.723Z]
00:29:50.084 Latency(us)
00:29:50.084 Device Information : runtime(s)     IOPS   MiB/s  Fail/s   TO/s    Average       min       max
00:29:50.084 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:29:50.084 Nvme1n1 :       5.01  11835.17   92.46    0.00   0.00   10801.99   2666.12  17992.61
00:29:50.084 ===================================================================================================================
00:29:50.084 Total :                11835.17   92.46    0.00   0.00   10801.99   2666.12  17992.61
[... the duplicate-NSID error pattern continues, with timestamps from 13:23:38.629 through 13:23:38.817, until the test loop ends ...]
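The summary is internally consistent, which is a quick way to sanity-check a table like this one (values below come from the rows above; the IO size is the 8192 bytes on the Job line):

  IOPS  ~ queue_depth / mean_latency = 128 / 10801.99e-6 s ~ 11850   (reported: 11835.17)
  MiB/s = IOPS x IO_size / 2^20      = 11835.17 x 8192 / 1048576 ~ 92.46

so the reported throughput, mean latency, and queue depth of 128 agree via Little's law.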
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (105681) - No such process 00:29:50.353 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 105681 00:29:50.353 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
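The kill error just above is benign: by the time zcopy.sh line 42 tries to stop it, the backgrounded I/O process (pid 105681) has already exited, and the script falls through to wait to reap it. A generic sketch of that shutdown idiom (the variable name is illustrative):

  io_pid=105681                     # pid of the backgrounded job
  kill "$io_pid" 2>/dev/null || :   # process may already be gone, as here
  wait "$io_pid" 2>/dev/null || :   # reap it either way, ignoring a missing child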
00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:50.354 delay0 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.354 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:29:50.612 [2024-12-06 13:23:39.017751] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:57.176 Initializing NVMe Controllers 00:29:57.176 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.176 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.176 Initialization complete. Launching workers. 
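For the abort pass whose statistics follow below, the test has just layered a delay bdev (delay0) over malloc0 and exported it as NSID 1, so that I/O stays in flight long enough to be abortable. Going by the usual meanings of SPDK's bdev_delay_create flags (-r/-w average read/write latency and -t/-n their p99 counterparts, in microseconds), the setup traced above amounts to roughly this sketch:

  # ~1 s of injected latency per I/O, so the abort tool can catch requests in flight:
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1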
00:29:57.176 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 141 00:29:57.176 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 428, failed to submit 33 00:29:57.176 success 340, unsuccessful 88, failed 0 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:57.176 rmmod nvme_tcp 00:29:57.176 rmmod nvme_fabrics 00:29:57.176 rmmod nvme_keyring 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 105516 ']' 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 105516 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 105516 ']' 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 105516 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105516 00:29:57.176 killing process with pid 105516 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105516' 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 105516 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 105516 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:57.176 13:23:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:29:57.176 ************************************ 00:29:57.176 END TEST nvmf_zcopy 00:29:57.176 ************************************ 00:29:57.176 00:29:57.176 real 0m25.044s 00:29:57.176 user 0m38.544s 00:29:57.176 sys 0m7.567s 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:57.176 ************************************ 00:29:57.176 START TEST nvmf_nmic 00:29:57.176 ************************************ 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:57.176 * Looking for test storage... 00:29:57.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:57.176 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:57.435 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:57.435 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:29:57.435 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:57.435 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.435 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.435 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.435 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.435 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.435 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:57.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.436 --rc genhtml_branch_coverage=1 00:29:57.436 --rc genhtml_function_coverage=1 00:29:57.436 --rc genhtml_legend=1 00:29:57.436 --rc geninfo_all_blocks=1 00:29:57.436 --rc geninfo_unexecuted_blocks=1 00:29:57.436 00:29:57.436 ' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:57.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.436 --rc genhtml_branch_coverage=1 00:29:57.436 --rc genhtml_function_coverage=1 00:29:57.436 --rc genhtml_legend=1 00:29:57.436 --rc geninfo_all_blocks=1 00:29:57.436 --rc geninfo_unexecuted_blocks=1 00:29:57.436 00:29:57.436 ' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:57.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.436 --rc genhtml_branch_coverage=1 00:29:57.436 --rc genhtml_function_coverage=1 00:29:57.436 --rc genhtml_legend=1 00:29:57.436 --rc geninfo_all_blocks=1 00:29:57.436 --rc geninfo_unexecuted_blocks=1 00:29:57.436 00:29:57.436 ' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:57.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.436 --rc genhtml_branch_coverage=1 00:29:57.436 --rc genhtml_function_coverage=1 00:29:57.436 --rc genhtml_legend=1 00:29:57.436 --rc geninfo_all_blocks=1 00:29:57.436 --rc geninfo_unexecuted_blocks=1 00:29:57.436 00:29:57.436 ' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.436 13:23:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:57.436 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:57.437 Cannot find device "nvmf_init_br" 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:57.437 Cannot find device "nvmf_init_br2" 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:57.437 Cannot find device "nvmf_tgt_br" 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:57.437 Cannot find device "nvmf_tgt_br2" 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:57.437 Cannot find device "nvmf_init_br" 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:29:57.437 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:57.437 Cannot find device "nvmf_init_br2" 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:57.437 Cannot find device "nvmf_tgt_br" 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:57.437 Cannot find device "nvmf_tgt_br2" 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
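The string of "Cannot find device" messages here is expected: nvmf_veth_init starts by tearing down whatever is left of a previous fixture, and each probe is paired with `true` so a missing interface is not fatal. The commands that follow (common.sh@177 through @219) then rebuild the test network from scratch. Condensed to a single initiator/target pair, the fixture amounts to roughly the sketch below (names and addresses taken from the log; the real script creates a second pair the same way and brings every link up):

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per side; the *_br ends stay on the host
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator at 10.0.0.1, target at 10.0.0.3, same /24
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # a bridge joins the host-side peers so the two ends can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # finally, iptables ACCEPT rules are inserted for the NVMe/TCP port (4420)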
00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:57.437 Cannot find device "nvmf_br" 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:57.437 Cannot find device "nvmf_init_if" 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:57.437 Cannot find device "nvmf_init_if2" 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:57.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:57.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:29:57.437 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:57.695 13:23:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:57.695 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:57.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:29:57.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms
00:29:57.696
00:29:57.696 --- 10.0.0.3 ping statistics ---
00:29:57.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:57.696 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:29:57.696 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:29:57.696 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms
00:29:57.696
00:29:57.696 --- 10.0.0.4 ping statistics ---
00:29:57.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:57.696 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:29:57.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:57.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:29:57.696
00:29:57.696 --- 10.0.0.1 ping statistics ---
00:29:57.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:57.696 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:29:57.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:57.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms
00:29:57.696
00:29:57.696 --- 10.0.0.2 ping statistics ---
00:29:57.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:57.696 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=106056
00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 106056 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 106056 ']' 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.696 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:57.966 [2024-12-06 13:23:46.396403] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:57.966 [2024-12-06 13:23:46.397703] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:29:57.966 [2024-12-06 13:23:46.397786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.966 [2024-12-06 13:23:46.553124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:58.239 [2024-12-06 13:23:46.623322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.240 [2024-12-06 13:23:46.623637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.240 [2024-12-06 13:23:46.623744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.240 [2024-12-06 13:23:46.623835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.240 [2024-12-06 13:23:46.623951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.240 [2024-12-06 13:23:46.625291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.240 [2024-12-06 13:23:46.625362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:58.240 [2024-12-06 13:23:46.625494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:58.240 [2024-12-06 13:23:46.625469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.240 [2024-12-06 13:23:46.725339] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:58.240 [2024-12-06 13:23:46.726054] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:58.240 [2024-12-06 13:23:46.726615] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:58.240 [2024-12-06 13:23:46.726281] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:58.240 [2024-12-06 13:23:46.727110] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.174 [2024-12-06 13:23:47.514790] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.174 Malloc0 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
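rpc_cmd in the trace above is the harness wrapper that forwards its arguments to the target's JSON-RPC socket (/var/tmp/spdk.sock here, per waitforlisten). Outside the harness the same bring-up can be reproduced with scripts/rpc.py directly; a sketch of the equivalent calls, matching the parameters visible in the log (prefix with `ip netns exec nvmf_tgt_ns_spdk` if the target runs inside the test namespace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB in-capsule data
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420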
00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:59.174 [2024-12-06 13:23:47.598958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.174 test case1: single bdev can't be used in multiple subsystems
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:29:59.174 [2024-12-06 13:23:47.622527] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:29:59.174 [2024-12-06 13:23:47.622594] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:29:59.174 [2024-12-06 13:23:47.622609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:59.174 2024/12/06 13:23:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:29:59.174 request:
00:29:59.174 {
00:29:59.174 "method": "nvmf_subsystem_add_ns",
00:29:59.174 "params": {
00:29:59.174 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:29:59.174 "namespace": {
00:29:59.174 "bdev_name": "Malloc0",
00:29:59.174 "no_auto_visible": false,
00:29:59.174 "hide_metadata": false
00:29:59.174 }
00:29:59.174 }
00:29:59.174 }
00:29:59.174 Got JSON-RPC error response
00:29:59.174 GoRPCClient: error on JSON-RPC call
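This failure is the point of test case 1: cnode1 already opened Malloc0 with an exclusive_write claim, so attaching the same bdev to cnode2 is refused and the RPC returns -32602 (Invalid parameters); the `[[ 1 == 0 ]]` / nmic_status=1 sequence below records the non-zero exit as the expected result. A minimal stand-alone reproduction, reusing the objects created above, might look like:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    if ! scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'rejected as expected: Malloc0 is already claimed by cnode1'
    fi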
00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:59.174 Adding namespace failed - expected result. 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:59.174 test case2: host connect to nvmf target in multiple paths 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.174 [2024-12-06 13:23:47.634624] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:59.174 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:30:01.704 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:01.704 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:01.704 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:01.704 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:01.704 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:01.704 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:30:01.704 
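Test case 2 connects the same host to the same subsystem through two listeners (ports 4420 and 4421). That yields two controllers, as the later teardown confirms ("disconnected 2 controller(s)"), but with the kernel's native NVMe multipath only a single block device, which is why waitforserial counts exactly one SPDKISFASTANDAWESOME entry in lsblk. On the initiator side this boils down to roughly (hostnqn/hostid as generated earlier in the log):

    HOST="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8"
    nvme connect $HOST -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme connect $HOST -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
    nvme list-subsys                                         # one subsystem, two tcp paths
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # -> 1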
13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:30:01.704 [global]
00:30:01.704 thread=1
00:30:01.704 invalidate=1
00:30:01.704 rw=write
00:30:01.704 time_based=1
00:30:01.704 runtime=1
00:30:01.704 ioengine=libaio
00:30:01.704 direct=1
00:30:01.704 bs=4096
00:30:01.704 iodepth=1
00:30:01.704 norandommap=0
00:30:01.704 numjobs=1
00:30:01.704
00:30:01.704 verify_dump=1
00:30:01.704 verify_backlog=512
00:30:01.704 verify_state_save=0
00:30:01.704 do_verify=1
00:30:01.704 verify=crc32c-intel
00:30:01.704 [job0]
00:30:01.704 filename=/dev/nvme0n1
00:30:01.704 Could not set queue depth (nvme0n1)
00:30:01.704 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:30:01.704 fio-3.35
00:30:01.704 Starting 1 thread
00:30:02.635
00:30:02.635 job0: (groupid=0, jobs=1): err= 0: pid=106162: Fri Dec 6 13:23:51 2024
00:30:02.635 read: IOPS=2390, BW=9562KiB/s (9792kB/s)(9572KiB/1001msec)
00:30:02.635 slat (nsec): min=12690, max=82511, avg=19153.93, stdev=7225.19
00:30:02.635 clat (usec): min=162, max=928, avg=212.21, stdev=27.34
00:30:02.635 lat (usec): min=175, max=953, avg=231.36, stdev=28.38
00:30:02.635 clat percentiles (usec):
00:30:02.635 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192],
00:30:02.635 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 215],
00:30:02.635 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 253],
00:30:02.635 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 338], 99.95th=[ 392],
00:30:02.635 | 99.99th=[ 930]
00:30:02.635 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:30:02.635 slat (nsec): min=14737, max=85376, avg=25882.47, stdev=8165.22
00:30:02.635 clat (usec): min=106, max=303, avg=144.80, stdev=18.40
00:30:02.635 lat (usec): min=125, max=332, avg=170.68, stdev=20.69
00:30:02.635 clat percentiles (usec):
00:30:02.635 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131],
00:30:02.635 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145],
00:30:02.635 | 70.00th=[ 151], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 180],
00:30:02.635 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 277], 99.95th=[ 285],
00:30:02.635 | 99.99th=[ 306]
00:30:02.635 bw ( KiB/s): min=12000, max=12000, per=100.00%, avg=12000.00, stdev= 0.00, samples=1
00:30:02.635 iops : min= 3000, max= 3000, avg=3000.00, stdev= 0.00, samples=1
00:30:02.635 lat (usec) : 250=97.01%, 500=2.97%, 1000=0.02%
00:30:02.635 cpu : usr=2.30%, sys=8.00%, ctx=4953, majf=0, minf=5
00:30:02.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:02.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:02.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:02.635 issued rwts: total=2393,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:02.635 latency : target=0, window=0, percentile=100.00%, depth=1
00:30:02.635
00:30:02.635 Run status group 0 (all jobs):
00:30:02.635 READ: bw=9562KiB/s (9792kB/s), 9562KiB/s-9562KiB/s (9792kB/s-9792kB/s), io=9572KiB (9802kB), run=1001-1001msec
00:30:02.635 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec
00:30:02.635
00:30:02.635 Disk stats (read/write):
00:30:02.635 nvme0n1: ios=2097/2399, merge=0/0, ticks=474/381, in_queue=855, util=91.26%
00:30:02.635 13:23:51
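As a sanity check, the fio summary is internally consistent: the read side issued 2393 requests at bs=4096, i.e. 2393 × 4 KiB = 9572 KiB, which over the 1001 ms run matches the reported 9562 KiB/s READ bandwidth, and the write side's 2560 × 4 KiB = 10.0 MiB over the same 1001 ms gives the reported 9.99 MiB/s WRITE figure. Note that reads appear at all only because this is a verifying write workload (do_verify=1, verify=crc32c-intel): each written block is read back and its checksum validated.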
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:02.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:02.635 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:02.635 rmmod nvme_tcp 00:30:02.893 rmmod nvme_fabrics 00:30:02.893 rmmod nvme_keyring 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 106056 ']' 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 106056 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 106056 ']' 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 106056 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106056 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:02.893 13:23:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:02.893 killing process with pid 106056 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106056' 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 106056 00:30:02.893 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 106056 00:30:03.150 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:03.150 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:03.150 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:03.150 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:03.150 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:03.150 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:03.151 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:03.408 13:23:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:30:03.408 00:30:03.408 real 0m6.122s 00:30:03.408 user 0m15.107s 00:30:03.408 sys 0m1.974s 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:03.408 ************************************ 00:30:03.408 END TEST nvmf_nmic 00:30:03.408 ************************************ 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:03.408 ************************************ 00:30:03.408 START TEST nvmf_fio_target 00:30:03.408 ************************************ 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:03.408 * Looking for test storage... 
00:30:03.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:30:03.408 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:03.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.667 --rc genhtml_branch_coverage=1 00:30:03.667 --rc genhtml_function_coverage=1 00:30:03.667 --rc genhtml_legend=1 00:30:03.667 --rc geninfo_all_blocks=1 00:30:03.667 --rc geninfo_unexecuted_blocks=1 00:30:03.667 00:30:03.667 ' 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:03.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.667 --rc genhtml_branch_coverage=1 00:30:03.667 --rc genhtml_function_coverage=1 00:30:03.667 --rc genhtml_legend=1 00:30:03.667 --rc geninfo_all_blocks=1 00:30:03.667 --rc geninfo_unexecuted_blocks=1 00:30:03.667 00:30:03.667 ' 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:03.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.667 --rc genhtml_branch_coverage=1 00:30:03.667 --rc genhtml_function_coverage=1 00:30:03.667 --rc genhtml_legend=1 00:30:03.667 --rc geninfo_all_blocks=1 00:30:03.667 --rc geninfo_unexecuted_blocks=1 00:30:03.667 00:30:03.667 ' 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:03.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.667 --rc genhtml_branch_coverage=1 00:30:03.667 --rc genhtml_function_coverage=1 00:30:03.667 --rc genhtml_legend=1 00:30:03.667 --rc geninfo_all_blocks=1 00:30:03.667 --rc geninfo_unexecuted_blocks=1 00:30:03.667 
00:30:03.667 ' 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.667 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:03.668 13:23:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:03.668 Cannot find device "nvmf_init_br" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:03.668 Cannot find device "nvmf_init_br2" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:03.668 Cannot find device "nvmf_tgt_br" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:03.668 Cannot find device "nvmf_tgt_br2" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:03.668 Cannot find device "nvmf_init_br" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:03.668 Cannot find device "nvmf_init_br2" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:03.668 Cannot find device "nvmf_tgt_br" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:03.668 Cannot find device "nvmf_tgt_br2" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:03.668 Cannot find device "nvmf_br" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:03.668 Cannot find device "nvmf_init_if" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:03.668 Cannot find device "nvmf_init_if2" 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:03.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:03.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:03.668 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:03.926 13:23:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:03.926 13:23:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:03.926 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:03.926 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:03.926 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:30:03.926 00:30:03.926 --- 10.0.0.3 ping statistics --- 00:30:03.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.927 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:03.927 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:03.927 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:30:03.927 00:30:03.927 --- 10.0.0.4 ping statistics --- 00:30:03.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.927 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:03.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:03.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:30:03.927 00:30:03.927 --- 10.0.0.1 ping statistics --- 00:30:03.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.927 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:03.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:03.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:30:03.927 00:30:03.927 --- 10.0.0.2 ping statistics --- 00:30:03.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.927 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.927 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:04.184 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=106392 00:30:04.184 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:04.184 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 106392 00:30:04.184 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 106392 ']' 00:30:04.184 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.184 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.184 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.184 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.184 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:04.184 [2024-12-06 13:23:52.637423] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
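
Note on the nvmf_veth_init trace above: the "Cannot find device" and "Cannot open network namespace" lines are expected output from the best-effort teardown of any previous run (each is followed by "# true" in the trace). The topology the function then builds is small: two veth pairs on the initiator side, two on the target side, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all four peer ends enslaved to a single bridge. A minimal standalone sketch of the same wiring, shown with one pair per side and the names, addresses, and firewall rules taken from the trace (run as root; this illustrates the layout, it is not the test's exact script):

  #!/usr/bin/env bash
  # Rebuild the core of the veth/bridge topology from the trace above,
  # with one initiator pair and one target pair (the test creates two of each).
  set -e
  ip netns add nvmf_tgt_ns_spdk                              # namespace that will host nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # bridge joining both sides
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridged traffic
  ping -c 1 10.0.0.3                                         # same sanity check as the trace

The four ping checks in the trace exercise exactly this path in both directions before the target is started inside the namespace.
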
00:30:04.184 [2024-12-06 13:23:52.638875] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:30:04.184 [2024-12-06 13:23:52.638988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.184 [2024-12-06 13:23:52.794200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:04.445 [2024-12-06 13:23:52.867166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.445 [2024-12-06 13:23:52.867236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.445 [2024-12-06 13:23:52.867258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.445 [2024-12-06 13:23:52.867269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.445 [2024-12-06 13:23:52.867278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.445 [2024-12-06 13:23:52.868531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.445 [2024-12-06 13:23:52.868611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.445 [2024-12-06 13:23:52.868742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.445 [2024-12-06 13:23:52.868749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.445 [2024-12-06 13:23:52.968963] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:04.445 [2024-12-06 13:23:52.969577] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:04.445 [2024-12-06 13:23:52.969721] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:04.445 [2024-12-06 13:23:52.969768] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:04.445 [2024-12-06 13:23:52.970013] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
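
With the target up and its reactors in interrupt mode, the provisioning traced below (target/fio.sh@19 through @48) reduces to a short sequence of rpc.py calls plus one nvme connect. A hand-run equivalent, assuming the target's RPC socket is reachable from the host and using only commands that appear verbatim in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc_py as set at target/fio.sh@14

  $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, flags exactly as traced

  $rpc bdev_malloc_create 64 512                    # -> Malloc0 (64 MiB, 512 B blocks)
  $rpc bdev_malloc_create 64 512                    # -> Malloc1
  $rpc bdev_malloc_create 64 512                    # -> Malloc2, raid0 member
  $rpc bdev_malloc_create 64 512                    # -> Malloc3, raid0 member
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_malloc_create 64 512                    # -> Malloc4..Malloc6, concat members
  $rpc bdev_malloc_create 64 512
  $rpc bdev_malloc_create 64 512
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 \
    --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

The subsystem ends up with four namespaces (two plain malloc bdevs, a raid0 array, and a concat array), which surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4; those are the filenames every fio job file below targets, and waitforserial confirms all four devices before fio starts.
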
00:30:05.014 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.014 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:30:05.014 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.014 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.014 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.271 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.271 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:05.271 [2024-12-06 13:23:53.902724] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.529 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:05.787 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:05.787 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:06.046 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:06.046 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:06.305 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:06.305 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:06.871 13:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:06.871 13:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:06.871 13:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:07.437 13:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:07.437 13:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:07.437 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:07.437 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:08.004 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:08.004 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:08.004 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:08.263 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:08.263 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:08.830 13:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:08.830 13:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:09.088 13:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:09.347 [2024-12-06 13:23:57.726779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:09.347 13:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:09.606 13:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:09.864 13:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:09.864 13:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:09.864 13:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:30:09.864 13:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:09.864 13:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:30:09.864 13:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:30:09.864 13:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:30:11.767 13:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:11.767 13:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:11.767 13:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:11.767 13:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:30:11.767 13:24:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:11.767 13:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:30:11.767 13:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:11.767 [global] 00:30:11.767 thread=1 00:30:11.767 invalidate=1 00:30:11.767 rw=write 00:30:11.767 time_based=1 00:30:11.768 runtime=1 00:30:11.768 ioengine=libaio 00:30:11.768 direct=1 00:30:11.768 bs=4096 00:30:11.768 iodepth=1 00:30:11.768 norandommap=0 00:30:11.768 numjobs=1 00:30:11.768 00:30:11.768 verify_dump=1 00:30:11.768 verify_backlog=512 00:30:11.768 verify_state_save=0 00:30:11.768 do_verify=1 00:30:11.768 verify=crc32c-intel 00:30:11.768 [job0] 00:30:11.768 filename=/dev/nvme0n1 00:30:11.768 [job1] 00:30:11.768 filename=/dev/nvme0n2 00:30:11.768 [job2] 00:30:11.768 filename=/dev/nvme0n3 00:30:11.768 [job3] 00:30:11.768 filename=/dev/nvme0n4 00:30:12.085 Could not set queue depth (nvme0n1) 00:30:12.085 Could not set queue depth (nvme0n2) 00:30:12.085 Could not set queue depth (nvme0n3) 00:30:12.085 Could not set queue depth (nvme0n4) 00:30:12.085 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:12.085 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:12.085 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:12.085 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:12.085 fio-3.35 00:30:12.085 Starting 4 threads 00:30:13.458 00:30:13.458 job0: (groupid=0, jobs=1): err= 0: pid=106687: Fri Dec 6 13:24:01 2024 00:30:13.458 read: IOPS=1911, BW=7644KiB/s (7828kB/s)(7652KiB/1001msec) 00:30:13.458 slat (nsec): min=13438, max=95912, avg=18883.64, stdev=5845.07 00:30:13.458 clat (usec): min=165, max=554, avg=259.04, stdev=30.53 00:30:13.458 lat (usec): min=201, max=571, avg=277.93, stdev=31.13 00:30:13.458 clat percentiles (usec): 00:30:13.458 | 1.00th=[ 198], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 233], 00:30:13.458 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 265], 00:30:13.458 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 310], 00:30:13.458 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 465], 99.95th=[ 553], 00:30:13.458 | 99.99th=[ 553] 00:30:13.458 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:13.458 slat (nsec): min=18638, max=83809, avg=26167.74, stdev=7328.04 00:30:13.458 clat (usec): min=129, max=2876, avg=198.37, stdev=66.12 00:30:13.458 lat (usec): min=154, max=2922, avg=224.54, stdev=66.79 00:30:13.458 clat percentiles (usec): 00:30:13.458 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 172], 00:30:13.458 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 204], 00:30:13.458 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 247], 00:30:13.458 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 326], 99.95th=[ 330], 00:30:13.458 | 99.99th=[ 2868] 00:30:13.458 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:30:13.458 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:13.458 lat (usec) : 250=68.69%, 500=31.25%, 750=0.03% 00:30:13.458 lat (msec) : 
4=0.03% 00:30:13.458 cpu : usr=2.50%, sys=6.00%, ctx=3971, majf=0, minf=3 00:30:13.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.458 issued rwts: total=1913,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:13.458 job1: (groupid=0, jobs=1): err= 0: pid=106688: Fri Dec 6 13:24:01 2024 00:30:13.458 read: IOPS=1049, BW=4200KiB/s (4301kB/s)(4204KiB/1001msec) 00:30:13.458 slat (nsec): min=11267, max=28974, avg=14750.93, stdev=2957.90 00:30:13.458 clat (usec): min=225, max=796, avg=431.95, stdev=84.26 00:30:13.458 lat (usec): min=246, max=812, avg=446.70, stdev=84.79 00:30:13.458 clat percentiles (usec): 00:30:13.458 | 1.00th=[ 247], 5.00th=[ 273], 10.00th=[ 302], 20.00th=[ 388], 00:30:13.458 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 441], 00:30:13.458 | 70.00th=[ 457], 80.00th=[ 482], 90.00th=[ 553], 95.00th=[ 594], 00:30:13.458 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 750], 99.95th=[ 799], 00:30:13.458 | 99.99th=[ 799] 00:30:13.458 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:30:13.458 slat (usec): min=15, max=112, avg=28.45, stdev= 7.42 00:30:13.458 clat (usec): min=145, max=3329, avg=313.64, stdev=131.93 00:30:13.458 lat (usec): min=177, max=3375, avg=342.08, stdev=132.13 00:30:13.458 clat percentiles (usec): 00:30:13.458 | 1.00th=[ 186], 5.00th=[ 227], 10.00th=[ 243], 20.00th=[ 262], 00:30:13.458 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 314], 00:30:13.458 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 383], 95.00th=[ 400], 00:30:13.458 | 99.00th=[ 465], 99.50th=[ 553], 99.90th=[ 2212], 99.95th=[ 3326], 00:30:13.458 | 99.99th=[ 3326] 00:30:13.458 bw ( KiB/s): min= 6624, max= 6624, per=23.13%, avg=6624.00, stdev= 0.00, samples=1 00:30:13.458 iops : min= 1656, max= 1656, avg=1656.00, stdev= 0.00, samples=1 00:30:13.458 lat (usec) : 250=8.12%, 500=84.73%, 750=6.84%, 1000=0.04% 00:30:13.458 lat (msec) : 2=0.19%, 4=0.08% 00:30:13.458 cpu : usr=1.30%, sys=4.50%, ctx=2589, majf=0, minf=15 00:30:13.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.458 issued rwts: total=1051,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:13.458 job2: (groupid=0, jobs=1): err= 0: pid=106689: Fri Dec 6 13:24:01 2024 00:30:13.458 read: IOPS=1788, BW=7153KiB/s (7325kB/s)(7160KiB/1001msec) 00:30:13.458 slat (nsec): min=13266, max=55188, avg=18145.61, stdev=3980.89 00:30:13.458 clat (usec): min=182, max=646, avg=256.94, stdev=32.52 00:30:13.458 lat (usec): min=196, max=663, avg=275.08, stdev=33.52 00:30:13.458 clat percentiles (usec): 00:30:13.458 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 227], 00:30:13.458 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 265], 00:30:13.458 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 310], 00:30:13.458 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 371], 99.95th=[ 644], 00:30:13.458 | 99.99th=[ 644] 00:30:13.458 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:13.458 slat (usec): min=19, max=115, 
avg=27.73, stdev= 8.58 00:30:13.458 clat (usec): min=130, max=3635, avg=216.55, stdev=87.13 00:30:13.458 lat (usec): min=157, max=3678, avg=244.28, stdev=89.93 00:30:13.458 clat percentiles (usec): 00:30:13.458 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 180], 00:30:13.458 | 30.00th=[ 192], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 223], 00:30:13.458 | 70.00th=[ 233], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 285], 00:30:13.458 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 619], 99.95th=[ 873], 00:30:13.458 | 99.99th=[ 3621] 00:30:13.458 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:30:13.458 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:13.458 lat (usec) : 250=62.92%, 500=36.92%, 750=0.10%, 1000=0.03% 00:30:13.458 lat (msec) : 4=0.03% 00:30:13.458 cpu : usr=1.70%, sys=6.60%, ctx=3838, majf=0, minf=5 00:30:13.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.458 issued rwts: total=1790,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:13.458 job3: (groupid=0, jobs=1): err= 0: pid=106690: Fri Dec 6 13:24:01 2024 00:30:13.458 read: IOPS=1090, BW=4364KiB/s (4468kB/s)(4368KiB/1001msec) 00:30:13.458 slat (nsec): min=11287, max=67860, avg=15813.44, stdev=4931.15 00:30:13.458 clat (usec): min=237, max=740, avg=427.18, stdev=76.26 00:30:13.458 lat (usec): min=251, max=754, avg=442.99, stdev=76.44 00:30:13.458 clat percentiles (usec): 00:30:13.458 | 1.00th=[ 255], 5.00th=[ 289], 10.00th=[ 314], 20.00th=[ 388], 00:30:13.458 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 437], 00:30:13.458 | 70.00th=[ 449], 80.00th=[ 474], 90.00th=[ 529], 95.00th=[ 562], 00:30:13.458 | 99.00th=[ 627], 99.50th=[ 644], 99.90th=[ 734], 99.95th=[ 742], 00:30:13.458 | 99.99th=[ 742] 00:30:13.458 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:30:13.458 slat (nsec): min=14288, max=79909, avg=29513.61, stdev=12296.54 00:30:13.458 clat (usec): min=189, max=1650, avg=303.33, stdev=66.87 00:30:13.458 lat (usec): min=221, max=1673, avg=332.85, stdev=61.24 00:30:13.458 clat percentiles (usec): 00:30:13.458 | 1.00th=[ 206], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 249], 00:30:13.458 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 306], 00:30:13.458 | 70.00th=[ 334], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 404], 00:30:13.458 | 99.00th=[ 437], 99.50th=[ 445], 99.90th=[ 586], 99.95th=[ 1647], 00:30:13.458 | 99.99th=[ 1647] 00:30:13.458 bw ( KiB/s): min= 7048, max= 7048, per=24.61%, avg=7048.00, stdev= 0.00, samples=1 00:30:13.458 iops : min= 1762, max= 1762, avg=1762.00, stdev= 0.00, samples=1 00:30:13.458 lat (usec) : 250=12.18%, 500=81.85%, 750=5.94% 00:30:13.458 lat (msec) : 2=0.04% 00:30:13.458 cpu : usr=1.20%, sys=4.70%, ctx=2640, majf=0, minf=15 00:30:13.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.458 issued rwts: total=1092,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:13.458 00:30:13.458 Run status group 0 (all jobs): 00:30:13.458 READ: 
bw=22.8MiB/s (23.9MB/s), 4200KiB/s-7644KiB/s (4301kB/s-7828kB/s), io=22.8MiB (23.9MB), run=1001-1001msec 00:30:13.458 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:30:13.458 00:30:13.458 Disk stats (read/write): 00:30:13.458 nvme0n1: ios=1586/1942, merge=0/0, ticks=441/411, in_queue=852, util=89.38% 00:30:13.458 nvme0n2: ios=1073/1213, merge=0/0, ticks=472/373, in_queue=845, util=89.10% 00:30:13.458 nvme0n3: ios=1563/1772, merge=0/0, ticks=429/414, in_queue=843, util=89.76% 00:30:13.458 nvme0n4: ios=1054/1267, merge=0/0, ticks=440/389, in_queue=829, util=90.64% 00:30:13.458 13:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:13.458 [global] 00:30:13.458 thread=1 00:30:13.458 invalidate=1 00:30:13.458 rw=randwrite 00:30:13.458 time_based=1 00:30:13.458 runtime=1 00:30:13.458 ioengine=libaio 00:30:13.458 direct=1 00:30:13.458 bs=4096 00:30:13.458 iodepth=1 00:30:13.458 norandommap=0 00:30:13.458 numjobs=1 00:30:13.458 00:30:13.458 verify_dump=1 00:30:13.458 verify_backlog=512 00:30:13.458 verify_state_save=0 00:30:13.458 do_verify=1 00:30:13.458 verify=crc32c-intel 00:30:13.458 [job0] 00:30:13.458 filename=/dev/nvme0n1 00:30:13.458 [job1] 00:30:13.458 filename=/dev/nvme0n2 00:30:13.458 [job2] 00:30:13.458 filename=/dev/nvme0n3 00:30:13.458 [job3] 00:30:13.458 filename=/dev/nvme0n4 00:30:13.458 Could not set queue depth (nvme0n1) 00:30:13.458 Could not set queue depth (nvme0n2) 00:30:13.458 Could not set queue depth (nvme0n3) 00:30:13.459 Could not set queue depth (nvme0n4) 00:30:13.459 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:13.459 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:13.459 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:13.459 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:13.459 fio-3.35 00:30:13.459 Starting 4 threads 00:30:14.835 00:30:14.835 job0: (groupid=0, jobs=1): err= 0: pid=106743: Fri Dec 6 13:24:03 2024 00:30:14.835 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:14.835 slat (nsec): min=13272, max=50428, avg=15680.30, stdev=3199.13 00:30:14.835 clat (usec): min=203, max=2658, avg=241.46, stdev=57.93 00:30:14.835 lat (usec): min=217, max=2691, avg=257.14, stdev=58.30 00:30:14.835 clat percentiles (usec): 00:30:14.835 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227], 00:30:14.835 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:30:14.835 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 269], 00:30:14.835 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 586], 99.95th=[ 889], 00:30:14.835 | 99.99th=[ 2671] 00:30:14.835 write: IOPS=2433, BW=9734KiB/s (9968kB/s)(9744KiB/1001msec); 0 zone resets 00:30:14.835 slat (nsec): min=17909, max=91467, avg=22208.59, stdev=4837.02 00:30:14.835 clat (usec): min=135, max=2292, avg=169.14, stdev=47.06 00:30:14.835 lat (usec): min=154, max=2315, avg=191.35, stdev=47.47 00:30:14.835 clat percentiles (usec): 00:30:14.835 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:30:14.835 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:30:14.835 | 70.00th=[ 174], 80.00th=[ 178], 
90.00th=[ 186], 95.00th=[ 194], 00:30:14.835 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 562], 99.95th=[ 562], 00:30:14.835 | 99.99th=[ 2278] 00:30:14.835 bw ( KiB/s): min= 9584, max= 9584, per=26.88%, avg=9584.00, stdev= 0.00, samples=1 00:30:14.835 iops : min= 2396, max= 2396, avg=2396.00, stdev= 0.00, samples=1 00:30:14.835 lat (usec) : 250=89.70%, 500=10.15%, 750=0.09%, 1000=0.02% 00:30:14.835 lat (msec) : 4=0.04% 00:30:14.835 cpu : usr=1.70%, sys=6.40%, ctx=4484, majf=0, minf=11 00:30:14.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.835 issued rwts: total=2048,2436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:14.835 job1: (groupid=0, jobs=1): err= 0: pid=106744: Fri Dec 6 13:24:03 2024 00:30:14.835 read: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec) 00:30:14.835 slat (nsec): min=10598, max=50862, avg=14816.82, stdev=3742.04 00:30:14.835 clat (usec): min=177, max=590, avg=315.10, stdev=53.27 00:30:14.835 lat (usec): min=188, max=605, avg=329.91, stdev=53.76 00:30:14.835 clat percentiles (usec): 00:30:14.835 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 262], 20.00th=[ 289], 00:30:14.835 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:30:14.835 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 392], 95.00th=[ 416], 00:30:14.835 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 545], 99.95th=[ 594], 00:30:14.836 | 99.99th=[ 594] 00:30:14.836 write: IOPS=1877, BW=7508KiB/s (7688kB/s)(7508KiB/1000msec); 0 zone resets 00:30:14.836 slat (nsec): min=14715, max=97253, avg=23017.33, stdev=7295.10 00:30:14.836 clat (usec): min=131, max=4147, avg=236.37, stdev=146.33 00:30:14.836 lat (usec): min=147, max=4203, avg=259.39, stdev=147.51 00:30:14.836 clat percentiles (usec): 00:30:14.836 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 180], 20.00th=[ 206], 00:30:14.836 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:30:14.836 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:30:14.836 | 99.00th=[ 306], 99.50th=[ 347], 99.90th=[ 3556], 99.95th=[ 4146], 00:30:14.836 | 99.99th=[ 4146] 00:30:14.836 bw ( KiB/s): min= 8192, max= 8192, per=22.98%, avg=8192.00, stdev= 0.00, samples=1 00:30:14.836 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:14.836 lat (usec) : 250=44.65%, 500=55.05%, 750=0.15% 00:30:14.836 lat (msec) : 2=0.03%, 4=0.09%, 10=0.03% 00:30:14.836 cpu : usr=1.70%, sys=4.90%, ctx=3413, majf=0, minf=15 00:30:14.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.836 issued rwts: total=1536,1877,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:14.836 job2: (groupid=0, jobs=1): err= 0: pid=106745: Fri Dec 6 13:24:03 2024 00:30:14.836 read: IOPS=1662, BW=6649KiB/s (6809kB/s)(6656KiB/1001msec) 00:30:14.836 slat (nsec): min=10837, max=63564, avg=14933.65, stdev=3817.70 00:30:14.836 clat (usec): min=192, max=657, avg=297.20, stdev=56.42 00:30:14.836 lat (usec): min=205, max=672, avg=312.13, stdev=56.59 00:30:14.836 clat percentiles (usec): 00:30:14.836 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 
215], 20.00th=[ 229], 00:30:14.836 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:30:14.836 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 367], 95.00th=[ 400], 00:30:14.836 | 99.00th=[ 429], 99.50th=[ 445], 99.90th=[ 570], 99.95th=[ 660], 00:30:14.836 | 99.99th=[ 660] 00:30:14.836 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:14.836 slat (usec): min=14, max=104, avg=20.72, stdev= 5.66 00:30:14.836 clat (usec): min=129, max=771, avg=211.10, stdev=44.97 00:30:14.836 lat (usec): min=159, max=792, avg=231.82, stdev=44.82 00:30:14.836 clat percentiles (usec): 00:30:14.836 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:30:14.836 | 30.00th=[ 176], 40.00th=[ 192], 50.00th=[ 215], 60.00th=[ 225], 00:30:14.836 | 70.00th=[ 239], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 277], 00:30:14.836 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 453], 99.95th=[ 474], 00:30:14.836 | 99.99th=[ 775] 00:30:14.836 bw ( KiB/s): min= 8192, max= 8192, per=22.98%, avg=8192.00, stdev= 0.00, samples=1 00:30:14.836 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:14.836 lat (usec) : 250=53.99%, 500=45.88%, 750=0.11%, 1000=0.03% 00:30:14.836 cpu : usr=1.20%, sys=5.40%, ctx=3714, majf=0, minf=10 00:30:14.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.836 issued rwts: total=1664,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:14.836 job3: (groupid=0, jobs=1): err= 0: pid=106746: Fri Dec 6 13:24:03 2024 00:30:14.836 read: IOPS=2054, BW=8220KiB/s (8417kB/s)(8228KiB/1001msec) 00:30:14.836 slat (nsec): min=13332, max=70125, avg=15305.53, stdev=3196.05 00:30:14.836 clat (usec): min=186, max=641, avg=223.30, stdev=26.74 00:30:14.836 lat (usec): min=201, max=657, avg=238.61, stdev=27.00 00:30:14.836 clat percentiles (usec): 00:30:14.836 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:30:14.836 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:30:14.836 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 269], 00:30:14.836 | 99.00th=[ 326], 99.50th=[ 367], 99.90th=[ 441], 99.95th=[ 586], 00:30:14.836 | 99.99th=[ 644] 00:30:14.836 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:30:14.836 slat (nsec): min=18749, max=82918, avg=22680.24, stdev=5109.98 00:30:14.836 clat (usec): min=133, max=572, avg=173.12, stdev=32.12 00:30:14.836 lat (usec): min=154, max=599, avg=195.80, stdev=34.89 00:30:14.836 clat percentiles (usec): 00:30:14.836 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:30:14.836 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:30:14.836 | 70.00th=[ 172], 80.00th=[ 184], 90.00th=[ 231], 95.00th=[ 241], 00:30:14.836 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 306], 99.95th=[ 515], 00:30:14.836 | 99.99th=[ 570] 00:30:14.836 bw ( KiB/s): min= 9352, max= 9352, per=26.23%, avg=9352.00, stdev= 0.00, samples=1 00:30:14.836 iops : min= 2338, max= 2338, avg=2338.00, stdev= 0.00, samples=1 00:30:14.836 lat (usec) : 250=95.71%, 500=4.20%, 750=0.09% 00:30:14.836 cpu : usr=0.70%, sys=7.60%, ctx=4617, majf=0, minf=14 00:30:14.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.836 issued rwts: total=2057,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:14.836 00:30:14.836 Run status group 0 (all jobs): 00:30:14.836 READ: bw=28.5MiB/s (29.9MB/s), 6144KiB/s-8220KiB/s (6291kB/s-8417kB/s), io=28.5MiB (29.9MB), run=1000-1001msec 00:30:14.836 WRITE: bw=34.8MiB/s (36.5MB/s), 7508KiB/s-9.99MiB/s (7688kB/s-10.5MB/s), io=34.8MiB (36.5MB), run=1000-1001msec 00:30:14.836 00:30:14.836 Disk stats (read/write): 00:30:14.836 nvme0n1: ios=1887/2048, merge=0/0, ticks=488/360, in_queue=848, util=89.48% 00:30:14.836 nvme0n2: ios=1461/1536, merge=0/0, ticks=474/368, in_queue=842, util=88.70% 00:30:14.836 nvme0n3: ios=1563/1712, merge=0/0, ticks=479/361, in_queue=840, util=89.78% 00:30:14.836 nvme0n4: ios=1929/2048, merge=0/0, ticks=463/383, in_queue=846, util=90.12% 00:30:14.836 13:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:14.836 [global] 00:30:14.836 thread=1 00:30:14.836 invalidate=1 00:30:14.836 rw=write 00:30:14.836 time_based=1 00:30:14.836 runtime=1 00:30:14.836 ioengine=libaio 00:30:14.836 direct=1 00:30:14.836 bs=4096 00:30:14.836 iodepth=128 00:30:14.836 norandommap=0 00:30:14.836 numjobs=1 00:30:14.836 00:30:14.836 verify_dump=1 00:30:14.836 verify_backlog=512 00:30:14.836 verify_state_save=0 00:30:14.836 do_verify=1 00:30:14.836 verify=crc32c-intel 00:30:14.836 [job0] 00:30:14.836 filename=/dev/nvme0n1 00:30:14.836 [job1] 00:30:14.836 filename=/dev/nvme0n2 00:30:14.836 [job2] 00:30:14.836 filename=/dev/nvme0n3 00:30:14.836 [job3] 00:30:14.836 filename=/dev/nvme0n4 00:30:14.836 Could not set queue depth (nvme0n1) 00:30:14.836 Could not set queue depth (nvme0n2) 00:30:14.836 Could not set queue depth (nvme0n3) 00:30:14.836 Could not set queue depth (nvme0n4) 00:30:14.836 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:14.837 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:14.837 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:14.837 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:14.837 fio-3.35 00:30:14.837 Starting 4 threads 00:30:16.231 00:30:16.231 job0: (groupid=0, jobs=1): err= 0: pid=106807: Fri Dec 6 13:24:04 2024 00:30:16.231 read: IOPS=3077, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1003msec) 00:30:16.231 slat (usec): min=7, max=6391, avg=148.71, stdev=722.38 00:30:16.231 clat (usec): min=2340, max=24520, avg=19132.48, stdev=2081.93 00:30:16.231 lat (usec): min=2354, max=25401, avg=19281.19, stdev=2019.17 00:30:16.231 clat percentiles (usec): 00:30:16.231 | 1.00th=[14091], 5.00th=[16581], 10.00th=[17171], 20.00th=[17695], 00:30:16.231 | 30.00th=[18220], 40.00th=[19006], 50.00th=[19268], 60.00th=[19530], 00:30:16.231 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21627], 95.00th=[21890], 00:30:16.231 | 99.00th=[23462], 99.50th=[23987], 99.90th=[24511], 99.95th=[24511], 00:30:16.231 | 99.99th=[24511] 00:30:16.231 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:30:16.231 slat (usec): min=13, max=5523, avg=142.89, stdev=620.58 00:30:16.231 clat (usec): min=2678, max=25273, 
avg=18787.66, stdev=2793.27 00:30:16.231 lat (usec): min=2703, max=25297, avg=18930.56, stdev=2766.99 00:30:16.231 clat percentiles (usec): 00:30:16.231 | 1.00th=[11207], 5.00th=[14484], 10.00th=[15139], 20.00th=[16712], 00:30:16.231 | 30.00th=[17695], 40.00th=[18220], 50.00th=[19006], 60.00th=[19792], 00:30:16.231 | 70.00th=[20055], 80.00th=[20841], 90.00th=[22152], 95.00th=[22938], 00:30:16.231 | 99.00th=[24773], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:30:16.231 | 99.99th=[25297] 00:30:16.231 bw ( KiB/s): min=13504, max=14300, per=30.91%, avg=13902.00, stdev=562.86, samples=2 00:30:16.231 iops : min= 3376, max= 3575, avg=3475.50, stdev=140.71, samples=2 00:30:16.231 lat (msec) : 4=0.27%, 10=0.48%, 20=66.74%, 50=32.51% 00:30:16.231 cpu : usr=3.29%, sys=10.68%, ctx=334, majf=0, minf=6 00:30:16.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:30:16.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.231 issued rwts: total=3087,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.231 job1: (groupid=0, jobs=1): err= 0: pid=106808: Fri Dec 6 13:24:04 2024 00:30:16.231 read: IOPS=1689, BW=6757KiB/s (6919kB/s)(6784KiB/1004msec) 00:30:16.231 slat (usec): min=4, max=11031, avg=275.64, stdev=1305.64 00:30:16.231 clat (usec): min=1190, max=53262, avg=34795.41, stdev=6708.19 00:30:16.231 lat (usec): min=9066, max=53276, avg=35071.05, stdev=6650.53 00:30:16.231 clat percentiles (usec): 00:30:16.231 | 1.00th=[ 9372], 5.00th=[26608], 10.00th=[28443], 20.00th=[31851], 00:30:16.231 | 30.00th=[32637], 40.00th=[33817], 50.00th=[34341], 60.00th=[35390], 00:30:16.231 | 70.00th=[36963], 80.00th=[39060], 90.00th=[42206], 95.00th=[46924], 00:30:16.231 | 99.00th=[49546], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:30:16.231 | 99.99th=[53216] 00:30:16.231 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:30:16.231 slat (usec): min=8, max=14047, avg=253.21, stdev=1224.12 00:30:16.231 clat (usec): min=20879, max=51154, avg=32637.39, stdev=6697.32 00:30:16.231 lat (usec): min=20904, max=51179, avg=32890.60, stdev=6679.59 00:30:16.231 clat percentiles (usec): 00:30:16.231 | 1.00th=[21627], 5.00th=[23725], 10.00th=[25297], 20.00th=[26608], 00:30:16.231 | 30.00th=[28967], 40.00th=[30016], 50.00th=[31589], 60.00th=[32375], 00:30:16.231 | 70.00th=[34341], 80.00th=[38011], 90.00th=[42730], 95.00th=[46400], 00:30:16.231 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:30:16.231 | 99.99th=[51119] 00:30:16.231 bw ( KiB/s): min= 8192, max= 8192, per=18.22%, avg=8192.00, stdev= 0.00, samples=2 00:30:16.231 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:30:16.231 lat (msec) : 2=0.03%, 10=0.85%, 20=0.85%, 50=96.21%, 100=2.06% 00:30:16.231 cpu : usr=2.49%, sys=5.38%, ctx=348, majf=0, minf=11 00:30:16.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:30:16.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.231 issued rwts: total=1696,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.231 job2: (groupid=0, jobs=1): err= 0: pid=106809: Fri Dec 6 13:24:04 2024 00:30:16.231 read: IOPS=3573, BW=14.0MiB/s 
(14.6MB/s)(14.0MiB/1003msec) 00:30:16.231 slat (usec): min=10, max=4919, avg=136.78, stdev=663.77 00:30:16.231 clat (usec): min=6009, max=21506, avg=17974.35, stdev=1814.07 00:30:16.231 lat (usec): min=6024, max=23947, avg=18111.12, stdev=1716.58 00:30:16.231 clat percentiles (usec): 00:30:16.231 | 1.00th=[10552], 5.00th=[15008], 10.00th=[16319], 20.00th=[17171], 00:30:16.231 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18482], 00:30:16.231 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19792], 95.00th=[20317], 00:30:16.231 | 99.00th=[21103], 99.50th=[21365], 99.90th=[21365], 99.95th=[21627], 00:30:16.231 | 99.99th=[21627] 00:30:16.231 write: IOPS=3597, BW=14.1MiB/s (14.7MB/s)(14.1MiB/1003msec); 0 zone resets 00:30:16.231 slat (usec): min=12, max=5300, avg=132.29, stdev=614.48 00:30:16.231 clat (usec): min=1820, max=24559, avg=17163.74, stdev=2481.36 00:30:16.231 lat (usec): min=1873, max=24621, avg=17296.03, stdev=2483.77 00:30:16.231 clat percentiles (usec): 00:30:16.231 | 1.00th=[13042], 5.00th=[13829], 10.00th=[14353], 20.00th=[15008], 00:30:16.231 | 30.00th=[15533], 40.00th=[16188], 50.00th=[17171], 60.00th=[17957], 00:30:16.231 | 70.00th=[18744], 80.00th=[19268], 90.00th=[20317], 95.00th=[20579], 00:30:16.231 | 99.00th=[23725], 99.50th=[24249], 99.90th=[24511], 99.95th=[24511], 00:30:16.231 | 99.99th=[24511] 00:30:16.231 bw ( KiB/s): min=12856, max=15847, per=31.91%, avg=14351.50, stdev=2114.96, samples=2 00:30:16.231 iops : min= 3214, max= 3961, avg=3587.50, stdev=528.21, samples=2 00:30:16.231 lat (msec) : 2=0.03%, 4=0.10%, 10=0.44%, 20=89.49%, 50=9.94% 00:30:16.231 cpu : usr=3.19%, sys=11.78%, ctx=336, majf=0, minf=1 00:30:16.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:30:16.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.231 issued rwts: total=3584,3608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.231 job3: (groupid=0, jobs=1): err= 0: pid=106810: Fri Dec 6 13:24:04 2024 00:30:16.231 read: IOPS=2027, BW=8112KiB/s (8306kB/s)(8136KiB/1003msec) 00:30:16.231 slat (usec): min=3, max=10956, avg=263.34, stdev=1289.57 00:30:16.231 clat (usec): min=887, max=49319, avg=33110.02, stdev=5900.66 00:30:16.231 lat (usec): min=9398, max=49336, avg=33373.36, stdev=5847.11 00:30:16.231 clat percentiles (usec): 00:30:16.231 | 1.00th=[ 9765], 5.00th=[24511], 10.00th=[27395], 20.00th=[29230], 00:30:16.231 | 30.00th=[31065], 40.00th=[32113], 50.00th=[33424], 60.00th=[34341], 00:30:16.231 | 70.00th=[35390], 80.00th=[37487], 90.00th=[40109], 95.00th=[42206], 00:30:16.231 | 99.00th=[46400], 99.50th=[47973], 99.90th=[47973], 99.95th=[49546], 00:30:16.231 | 99.99th=[49546] 00:30:16.231 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:30:16.231 slat (usec): min=10, max=13458, avg=218.71, stdev=1100.63 00:30:16.231 clat (usec): min=16894, max=39309, avg=28818.33, stdev=4057.16 00:30:16.231 lat (usec): min=16909, max=39347, avg=29037.05, stdev=4055.60 00:30:16.231 clat percentiles (usec): 00:30:16.231 | 1.00th=[21103], 5.00th=[22414], 10.00th=[23200], 20.00th=[25822], 00:30:16.231 | 30.00th=[26346], 40.00th=[27657], 50.00th=[28443], 60.00th=[30016], 00:30:16.231 | 70.00th=[31851], 80.00th=[32637], 90.00th=[34866], 95.00th=[35390], 00:30:16.231 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[37487], 00:30:16.231 | 
99.99th=[39060] 00:30:16.232 bw ( KiB/s): min= 8192, max= 8208, per=18.23%, avg=8200.00, stdev=11.31, samples=2 00:30:16.232 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:30:16.232 lat (usec) : 1000=0.02% 00:30:16.232 lat (msec) : 10=0.71%, 20=1.20%, 50=98.06% 00:30:16.232 cpu : usr=2.00%, sys=6.69%, ctx=358, majf=0, minf=9 00:30:16.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:30:16.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.232 issued rwts: total=2034,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.232 00:30:16.232 Run status group 0 (all jobs): 00:30:16.232 READ: bw=40.5MiB/s (42.4MB/s), 6757KiB/s-14.0MiB/s (6919kB/s-14.6MB/s), io=40.6MiB (42.6MB), run=1003-1004msec 00:30:16.232 WRITE: bw=43.9MiB/s (46.1MB/s), 8159KiB/s-14.1MiB/s (8355kB/s-14.7MB/s), io=44.1MiB (46.2MB), run=1003-1004msec 00:30:16.232 00:30:16.232 Disk stats (read/write): 00:30:16.232 nvme0n1: ios=2750/3072, merge=0/0, ticks=12002/13270, in_queue=25272, util=89.57% 00:30:16.232 nvme0n2: ios=1585/1686, merge=0/0, ticks=12701/13045, in_queue=25746, util=90.10% 00:30:16.232 nvme0n3: ios=3093/3196, merge=0/0, ticks=12894/12240, in_queue=25134, util=89.75% 00:30:16.232 nvme0n4: ios=1586/2048, merge=0/0, ticks=13545/15668, in_queue=29213, util=89.81% 00:30:16.232 13:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:16.232 [global] 00:30:16.232 thread=1 00:30:16.232 invalidate=1 00:30:16.232 rw=randwrite 00:30:16.232 time_based=1 00:30:16.232 runtime=1 00:30:16.232 ioengine=libaio 00:30:16.232 direct=1 00:30:16.232 bs=4096 00:30:16.232 iodepth=128 00:30:16.232 norandommap=0 00:30:16.232 numjobs=1 00:30:16.232 00:30:16.232 verify_dump=1 00:30:16.232 verify_backlog=512 00:30:16.232 verify_state_save=0 00:30:16.232 do_verify=1 00:30:16.232 verify=crc32c-intel 00:30:16.232 [job0] 00:30:16.232 filename=/dev/nvme0n1 00:30:16.232 [job1] 00:30:16.232 filename=/dev/nvme0n2 00:30:16.232 [job2] 00:30:16.232 filename=/dev/nvme0n3 00:30:16.232 [job3] 00:30:16.232 filename=/dev/nvme0n4 00:30:16.232 Could not set queue depth (nvme0n1) 00:30:16.232 Could not set queue depth (nvme0n2) 00:30:16.232 Could not set queue depth (nvme0n3) 00:30:16.232 Could not set queue depth (nvme0n4) 00:30:16.232 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:16.232 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:16.232 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:16.232 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:16.232 fio-3.35 00:30:16.232 Starting 4 threads 00:30:17.604 00:30:17.604 job0: (groupid=0, jobs=1): err= 0: pid=106863: Fri Dec 6 13:24:05 2024 00:30:17.604 read: IOPS=2575, BW=10.1MiB/s (10.5MB/s)(10.2MiB/1015msec) 00:30:17.604 slat (usec): min=5, max=19233, avg=192.93, stdev=1259.74 00:30:17.604 clat (usec): min=6297, max=83565, avg=20672.88, stdev=12714.41 00:30:17.604 lat (usec): min=6311, max=83586, avg=20865.81, stdev=12858.57 00:30:17.604 clat percentiles (usec): 00:30:17.604 | 
1.00th=[ 6783], 5.00th=[10683], 10.00th=[11469], 20.00th=[12911], 00:30:17.604 | 30.00th=[13304], 40.00th=[13960], 50.00th=[16057], 60.00th=[19792], 00:30:17.604 | 70.00th=[22414], 80.00th=[24249], 90.00th=[34866], 95.00th=[44303], 00:30:17.604 | 99.00th=[80217], 99.50th=[81265], 99.90th=[83362], 99.95th=[83362], 00:30:17.604 | 99.99th=[83362] 00:30:17.604 write: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec); 0 zone resets 00:30:17.604 slat (usec): min=4, max=28792, avg=154.62, stdev=1137.16 00:30:17.604 clat (usec): min=4683, max=83526, avg=24278.23, stdev=13488.02 00:30:17.604 lat (usec): min=4705, max=83536, avg=24432.85, stdev=13546.27 00:30:17.604 clat percentiles (usec): 00:30:17.604 | 1.00th=[ 6456], 5.00th=[10552], 10.00th=[11600], 20.00th=[12256], 00:30:17.604 | 30.00th=[19530], 40.00th=[21890], 50.00th=[23200], 60.00th=[24249], 00:30:17.604 | 70.00th=[25297], 80.00th=[25822], 90.00th=[35390], 95.00th=[60031], 00:30:17.604 | 99.00th=[72877], 99.50th=[73925], 99.90th=[77071], 99.95th=[83362], 00:30:17.604 | 99.99th=[83362] 00:30:17.604 bw ( KiB/s): min=11008, max=12976, per=18.48%, avg=11992.00, stdev=1391.59, samples=2 00:30:17.604 iops : min= 2752, max= 3244, avg=2998.00, stdev=347.90, samples=2 00:30:17.605 lat (msec) : 10=2.43%, 20=42.42%, 50=49.14%, 100=6.01% 00:30:17.605 cpu : usr=2.27%, sys=7.20%, ctx=290, majf=0, minf=13 00:30:17.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:30:17.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:17.605 issued rwts: total=2614,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:17.605 job1: (groupid=0, jobs=1): err= 0: pid=106864: Fri Dec 6 13:24:05 2024 00:30:17.605 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:30:17.605 slat (usec): min=3, max=10242, avg=88.39, stdev=616.39 00:30:17.605 clat (usec): min=5524, max=24313, avg=11775.53, stdev=2861.90 00:30:17.605 lat (usec): min=5537, max=24332, avg=11863.92, stdev=2903.17 00:30:17.605 clat percentiles (usec): 00:30:17.605 | 1.00th=[ 6128], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[ 9896], 00:30:17.605 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 00:30:17.605 | 70.00th=[12256], 80.00th=[13304], 90.00th=[15664], 95.00th=[18482], 00:30:17.605 | 99.00th=[20841], 99.50th=[21365], 99.90th=[24249], 99.95th=[24249], 00:30:17.605 | 99.99th=[24249] 00:30:17.605 write: IOPS=5688, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1004msec); 0 zone resets 00:30:17.605 slat (usec): min=5, max=8407, avg=80.74, stdev=524.50 00:30:17.605 clat (usec): min=2281, max=21810, avg=10678.05, stdev=1896.28 00:30:17.605 lat (usec): min=3708, max=21818, avg=10758.79, stdev=1948.08 00:30:17.605 clat percentiles (usec): 00:30:17.605 | 1.00th=[ 5276], 5.00th=[ 7177], 10.00th=[ 8291], 20.00th=[ 9634], 00:30:17.605 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:30:17.605 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12387], 95.00th=[12649], 00:30:17.605 | 99.00th=[15795], 99.50th=[17957], 99.90th=[21627], 99.95th=[21627], 00:30:17.605 | 99.99th=[21890] 00:30:17.605 bw ( KiB/s): min=20536, max=24576, per=34.77%, avg=22556.00, stdev=2856.71, samples=2 00:30:17.605 iops : min= 5134, max= 6144, avg=5639.00, stdev=714.18, samples=2 00:30:17.605 lat (msec) : 4=0.06%, 10=25.84%, 20=73.02%, 50=1.08% 00:30:17.605 cpu : usr=4.89%, sys=13.66%, ctx=490, majf=0, 
minf=9 00:30:17.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:30:17.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:17.605 issued rwts: total=5632,5711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:17.605 job2: (groupid=0, jobs=1): err= 0: pid=106865: Fri Dec 6 13:24:05 2024 00:30:17.605 read: IOPS=4687, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1010msec) 00:30:17.605 slat (usec): min=4, max=11981, avg=103.98, stdev=724.87 00:30:17.605 clat (usec): min=2339, max=25640, avg=13542.19, stdev=3351.29 00:30:17.605 lat (usec): min=5625, max=26072, avg=13646.17, stdev=3388.87 00:30:17.605 clat percentiles (usec): 00:30:17.605 | 1.00th=[ 6325], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10945], 00:30:17.605 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12911], 60.00th=[13173], 00:30:17.605 | 70.00th=[13829], 80.00th=[15926], 90.00th=[17957], 95.00th=[21103], 00:30:17.605 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25560], 99.95th=[25560], 00:30:17.605 | 99.99th=[25560] 00:30:17.605 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:30:17.605 slat (usec): min=5, max=10388, avg=93.10, stdev=621.99 00:30:17.605 clat (usec): min=3349, max=25560, avg=12479.51, stdev=2192.46 00:30:17.605 lat (usec): min=3374, max=25571, avg=12572.61, stdev=2259.27 00:30:17.605 clat percentiles (usec): 00:30:17.605 | 1.00th=[ 6194], 5.00th=[ 7701], 10.00th=[ 9896], 20.00th=[11600], 00:30:17.605 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[13042], 00:30:17.605 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14222], 95.00th=[14484], 00:30:17.605 | 99.00th=[20579], 99.50th=[20841], 99.90th=[24773], 99.95th=[25035], 00:30:17.605 | 99.99th=[25560] 00:30:17.605 bw ( KiB/s): min=20464, max=20480, per=31.55%, avg=20472.00, stdev=11.31, samples=2 00:30:17.605 iops : min= 5116, max= 5120, avg=5118.00, stdev= 2.83, samples=2 00:30:17.605 lat (msec) : 4=0.08%, 10=9.91%, 20=86.16%, 50=3.85% 00:30:17.605 cpu : usr=3.57%, sys=13.48%, ctx=457, majf=0, minf=14 00:30:17.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:30:17.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:17.605 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:17.605 job3: (groupid=0, jobs=1): err= 0: pid=106866: Fri Dec 6 13:24:05 2024 00:30:17.605 read: IOPS=2352, BW=9410KiB/s (9636kB/s)(9476KiB/1007msec) 00:30:17.605 slat (usec): min=6, max=20405, avg=171.12, stdev=1155.19 00:30:17.605 clat (usec): min=4789, max=45811, avg=21447.61, stdev=7421.77 00:30:17.605 lat (usec): min=5993, max=45837, avg=21618.74, stdev=7494.50 00:30:17.605 clat percentiles (usec): 00:30:17.605 | 1.00th=[10683], 5.00th=[11994], 10.00th=[12911], 20.00th=[13829], 00:30:17.605 | 30.00th=[15139], 40.00th=[19006], 50.00th=[21627], 60.00th=[22938], 00:30:17.605 | 70.00th=[24249], 80.00th=[28967], 90.00th=[33162], 95.00th=[34341], 00:30:17.605 | 99.00th=[37487], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:30:17.605 | 99.99th=[45876] 00:30:17.605 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:30:17.605 slat (usec): min=5, max=18183, avg=224.14, stdev=1231.57 00:30:17.605 clat 
(msec): min=3, max=106, avg=29.99, stdev=18.15 00:30:17.605 lat (msec): min=3, max=106, avg=30.21, stdev=18.24 00:30:17.605 clat percentiles (msec): 00:30:17.605 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 22], 00:30:17.605 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:30:17.605 | 70.00th=[ 26], 80.00th=[ 33], 90.00th=[ 58], 95.00th=[ 66], 00:30:17.605 | 99.00th=[ 104], 99.50th=[ 105], 99.90th=[ 105], 99.95th=[ 105], 00:30:17.605 | 99.99th=[ 107] 00:30:17.605 bw ( KiB/s): min= 9584, max=10896, per=15.78%, avg=10240.00, stdev=927.72, samples=2 00:30:17.605 iops : min= 2396, max= 2724, avg=2560.00, stdev=231.93, samples=2 00:30:17.605 lat (msec) : 4=0.14%, 10=1.56%, 20=26.78%, 50=64.29%, 100=6.27% 00:30:17.605 lat (msec) : 250=0.95% 00:30:17.605 cpu : usr=2.19%, sys=7.55%, ctx=301, majf=0, minf=11 00:30:17.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:30:17.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:17.605 issued rwts: total=2369,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:17.605 00:30:17.605 Run status group 0 (all jobs): 00:30:17.605 READ: bw=59.1MiB/s (61.9MB/s), 9410KiB/s-21.9MiB/s (9636kB/s-23.0MB/s), io=60.0MiB (62.9MB), run=1004-1015msec 00:30:17.605 WRITE: bw=63.4MiB/s (66.4MB/s), 9.93MiB/s-22.2MiB/s (10.4MB/s-23.3MB/s), io=64.3MiB (67.4MB), run=1004-1015msec 00:30:17.605 00:30:17.605 Disk stats (read/write): 00:30:17.605 nvme0n1: ios=2097/2560, merge=0/0, ticks=39716/57927, in_queue=97643, util=86.07% 00:30:17.605 nvme0n2: ios=4608/4927, merge=0/0, ticks=50194/49460, in_queue=99654, util=86.27% 00:30:17.605 nvme0n3: ios=4057/4096, merge=0/0, ticks=52234/48485, in_queue=100719, util=88.71% 00:30:17.605 nvme0n4: ios=2048/2087, merge=0/0, ticks=42513/58581, in_queue=101094, util=89.37% 00:30:17.605 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:17.605 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=106879 00:30:17.605 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:17.605 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:17.605 [global] 00:30:17.605 thread=1 00:30:17.605 invalidate=1 00:30:17.605 rw=read 00:30:17.605 time_based=1 00:30:17.605 runtime=10 00:30:17.605 ioengine=libaio 00:30:17.605 direct=1 00:30:17.605 bs=4096 00:30:17.605 iodepth=1 00:30:17.605 norandommap=1 00:30:17.605 numjobs=1 00:30:17.605 00:30:17.605 [job0] 00:30:17.605 filename=/dev/nvme0n1 00:30:17.605 [job1] 00:30:17.605 filename=/dev/nvme0n2 00:30:17.605 [job2] 00:30:17.605 filename=/dev/nvme0n3 00:30:17.605 [job3] 00:30:17.605 filename=/dev/nvme0n4 00:30:17.605 Could not set queue depth (nvme0n1) 00:30:17.605 Could not set queue depth (nvme0n2) 00:30:17.605 Could not set queue depth (nvme0n3) 00:30:17.605 Could not set queue depth (nvme0n4) 00:30:17.605 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:17.605 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:17.605 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:30:17.605 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:17.605 fio-3.35 00:30:17.605 Starting 4 threads 00:30:20.887 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:20.887 fio: pid=106923, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:20.887 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39387136, buflen=4096 00:30:20.887 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:20.887 fio: pid=106921, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:20.887 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38404096, buflen=4096 00:30:20.887 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:20.887 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:21.144 fio: pid=106919, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:21.144 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=50892800, buflen=4096 00:30:21.144 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:21.144 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:21.402 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48631808, buflen=4096 00:30:21.402 fio: pid=106920, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:21.660 00:30:21.660 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106919: Fri Dec 6 13:24:10 2024 00:30:21.660 read: IOPS=3587, BW=14.0MiB/s (14.7MB/s)(48.5MiB/3464msec) 00:30:21.660 slat (usec): min=10, max=10447, avg=17.72, stdev=163.66 00:30:21.660 clat (usec): min=158, max=112611, avg=259.61, stdev=1011.53 00:30:21.660 lat (usec): min=171, max=112628, avg=277.34, stdev=1024.69 00:30:21.660 clat percentiles (usec): 00:30:21.660 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:30:21.660 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 233], 00:30:21.660 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 330], 95.00th=[ 351], 00:30:21.660 | 99.00th=[ 433], 99.50th=[ 445], 99.90th=[ 627], 99.95th=[ 1811], 00:30:21.660 | 99.99th=[ 4080] 00:30:21.660 bw ( KiB/s): min=11776, max=17536, per=31.82%, avg=14594.67, stdev=2897.52, samples=6 00:30:21.660 iops : min= 2944, max= 4384, avg=3648.67, stdev=724.38, samples=6 00:30:21.660 lat (usec) : 250=65.81%, 500=34.06%, 750=0.06%, 1000=0.01% 00:30:21.660 lat (msec) : 2=0.02%, 4=0.02%, 10=0.01%, 250=0.01% 00:30:21.660 cpu : usr=0.98%, sys=4.45%, ctx=12442, majf=0, minf=1 00:30:21.660 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.660 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:30:21.660 issued rwts: total=12426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.660 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:21.660 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106920: Fri Dec 6 13:24:10 2024 00:30:21.660 read: IOPS=3144, BW=12.3MiB/s (12.9MB/s)(46.4MiB/3776msec) 00:30:21.660 slat (usec): min=8, max=7808, avg=19.52, stdev=166.89 00:30:21.660 clat (usec): min=161, max=3815, avg=296.99, stdev=91.72 00:30:21.660 lat (usec): min=172, max=8108, avg=316.51, stdev=189.36 00:30:21.660 clat percentiles (usec): 00:30:21.660 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 219], 00:30:21.660 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 318], 00:30:21.660 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 363], 00:30:21.660 | 99.00th=[ 437], 99.50th=[ 562], 99.90th=[ 955], 99.95th=[ 1467], 00:30:21.660 | 99.99th=[ 3392] 00:30:21.660 bw ( KiB/s): min=11336, max=15398, per=26.72%, avg=12254.57, stdev=1417.38, samples=7 00:30:21.660 iops : min= 2834, max= 3849, avg=3063.57, stdev=354.16, samples=7 00:30:21.660 lat (usec) : 250=23.24%, 500=76.04%, 750=0.50%, 1000=0.13% 00:30:21.660 lat (msec) : 2=0.04%, 4=0.04% 00:30:21.660 cpu : usr=0.77%, sys=4.21%, ctx=11888, majf=0, minf=2 00:30:21.660 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.660 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.660 issued rwts: total=11874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.660 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:21.660 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106921: Fri Dec 6 13:24:10 2024 00:30:21.660 read: IOPS=2930, BW=11.4MiB/s (12.0MB/s)(36.6MiB/3200msec) 00:30:21.660 slat (usec): min=9, max=12961, avg=17.32, stdev=164.52 00:30:21.660 clat (usec): min=172, max=4186, avg=322.55, stdev=78.42 00:30:21.661 lat (usec): min=183, max=13243, avg=339.87, stdev=182.05 00:30:21.661 clat percentiles (usec): 00:30:21.661 | 1.00th=[ 202], 5.00th=[ 258], 10.00th=[ 289], 20.00th=[ 302], 00:30:21.661 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 326], 00:30:21.661 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 355], 95.00th=[ 371], 00:30:21.661 | 99.00th=[ 449], 99.50th=[ 537], 99.90th=[ 1172], 99.95th=[ 1827], 00:30:21.661 | 99.99th=[ 4178] 00:30:21.661 bw ( KiB/s): min=11248, max=12152, per=25.36%, avg=11628.00, stdev=362.63, samples=6 00:30:21.661 iops : min= 2812, max= 3038, avg=2907.00, stdev=90.66, samples=6 00:30:21.661 lat (usec) : 250=4.44%, 500=94.90%, 750=0.46%, 1000=0.04% 00:30:21.661 lat (msec) : 2=0.11%, 4=0.03%, 10=0.01% 00:30:21.661 cpu : usr=0.91%, sys=3.25%, ctx=9389, majf=0, minf=1 00:30:21.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.661 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.661 issued rwts: total=9377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:21.661 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106923: Fri Dec 6 13:24:10 2024 00:30:21.661 read: IOPS=3292, BW=12.9MiB/s (13.5MB/s)(37.6MiB/2921msec) 
00:30:21.661 slat (nsec): min=10934, max=72938, avg=14800.22, stdev=3676.62 00:30:21.661 clat (usec): min=187, max=2248, avg=287.27, stdev=53.01 00:30:21.661 lat (usec): min=198, max=2263, avg=302.07, stdev=52.42 00:30:21.661 clat percentiles (usec): 00:30:21.661 | 1.00th=[ 215], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 249], 00:30:21.661 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 297], 00:30:21.661 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 363], 00:30:21.661 | 99.00th=[ 424], 99.50th=[ 433], 99.90th=[ 474], 99.95th=[ 676], 00:30:21.661 | 99.99th=[ 2245] 00:30:21.661 bw ( KiB/s): min=11960, max=14472, per=29.22%, avg=13400.00, stdev=1234.97, samples=5 00:30:21.661 iops : min= 2990, max= 3618, avg=3350.00, stdev=308.74, samples=5 00:30:21.661 lat (usec) : 250=20.84%, 500=79.09%, 750=0.02%, 1000=0.01% 00:30:21.661 lat (msec) : 2=0.01%, 4=0.02% 00:30:21.661 cpu : usr=0.89%, sys=4.21%, ctx=9626, majf=0, minf=2 00:30:21.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.661 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.661 issued rwts: total=9617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:21.661 00:30:21.661 Run status group 0 (all jobs): 00:30:21.661 READ: bw=44.8MiB/s (47.0MB/s), 11.4MiB/s-14.0MiB/s (12.0MB/s-14.7MB/s), io=169MiB (177MB), run=2921-3776msec 00:30:21.661 00:30:21.661 Disk stats (read/write): 00:30:21.661 nvme0n1: ios=11979/0, merge=0/0, ticks=3144/0, in_queue=3144, util=95.28% 00:30:21.661 nvme0n2: ios=11154/0, merge=0/0, ticks=3411/0, in_queue=3411, util=95.61% 00:30:21.661 nvme0n3: ios=9107/0, merge=0/0, ticks=2972/0, in_queue=2972, util=96.12% 00:30:21.661 nvme0n4: ios=9462/0, merge=0/0, ticks=2727/0, in_queue=2727, util=96.69% 00:30:21.661 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:21.661 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:21.918 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:21.918 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:22.176 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:22.176 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:22.434 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:22.434 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:22.691 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:22.691 13:24:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 106879 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:22.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:22.949 nvmf hotplug test: fio failed as expected 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:22.949 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.208 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:23.208 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:30:23.208 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:30:23.208 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:30:23.208 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:30:23.208 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.208 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:23.208 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.208 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:23.208 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.208 13:24:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.208 rmmod nvme_tcp 00:30:23.208 rmmod nvme_fabrics 00:30:23.208 rmmod nvme_keyring 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 106392 ']' 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 106392 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 106392 ']' 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 106392 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106392 00:30:23.466 killing process with pid 106392 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106392' 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 106392 00:30:23.466 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 106392 00:30:23.466 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.466 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:23.466 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:23.466 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:30:23.466 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:30:23.466 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:23.466 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:23.725 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:23.726 13:24:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:30:23.726 00:30:23.726 real 0m20.449s 00:30:23.726 user 1m0.892s 00:30:23.726 sys 0m10.124s 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.726 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.726 ************************************ 00:30:23.726 END TEST nvmf_fio_target 00:30:23.726 ************************************ 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:23.985 ************************************ 00:30:23.985 START TEST nvmf_bdevio 00:30:23.985 ************************************ 00:30:23.985 13:24:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:23.985 * Looking for test storage... 00:30:23.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:23.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.985 --rc genhtml_branch_coverage=1 00:30:23.985 --rc genhtml_function_coverage=1 00:30:23.985 --rc genhtml_legend=1 00:30:23.985 --rc geninfo_all_blocks=1 00:30:23.985 --rc geninfo_unexecuted_blocks=1 00:30:23.985 00:30:23.985 ' 00:30:23.985 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:23.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.985 --rc genhtml_branch_coverage=1 00:30:23.985 --rc genhtml_function_coverage=1 00:30:23.985 --rc genhtml_legend=1 00:30:23.986 --rc geninfo_all_blocks=1 00:30:23.986 --rc geninfo_unexecuted_blocks=1 00:30:23.986 00:30:23.986 ' 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:23.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.986 --rc genhtml_branch_coverage=1 00:30:23.986 --rc genhtml_function_coverage=1 00:30:23.986 --rc genhtml_legend=1 00:30:23.986 --rc geninfo_all_blocks=1 00:30:23.986 --rc geninfo_unexecuted_blocks=1 00:30:23.986 00:30:23.986 ' 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:23.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.986 --rc genhtml_branch_coverage=1 00:30:23.986 --rc genhtml_function_coverage=1 00:30:23.986 --rc genhtml_legend=1 00:30:23.986 --rc geninfo_all_blocks=1 00:30:23.986 --rc geninfo_unexecuted_blocks=1 00:30:23.986 00:30:23.986 ' 00:30:23.986 13:24:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.986 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.245 13:24:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:24.245 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:24.246 Cannot find device "nvmf_init_br" 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:24.246 Cannot find device "nvmf_init_br2" 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:24.246 Cannot find device "nvmf_tgt_br" 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:24.246 Cannot find device "nvmf_tgt_br2" 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:24.246 Cannot find device "nvmf_init_br" 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:24.246 Cannot find device "nvmf_init_br2" 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:24.246 Cannot find device "nvmf_tgt_br" 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:24.246 Cannot find device "nvmf_tgt_br2" 00:30:24.246 13:24:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:24.246 Cannot find device "nvmf_br" 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:24.246 Cannot find device "nvmf_init_if" 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:24.246 Cannot find device "nvmf_init_if2" 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:24.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:24.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:24.246 13:24:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:24.246 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:24.505 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:30:24.505 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:30:24.505 00:30:24.505 --- 10.0.0.3 ping statistics --- 00:30:24.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.505 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:30:24.505 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:24.505 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:24.505 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:30:24.505 00:30:24.505 --- 10.0.0.4 ping statistics --- 00:30:24.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.505 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:24.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:24.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:30:24.505 00:30:24.505 --- 10.0.0.1 ping statistics --- 00:30:24.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.505 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:24.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:24.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:30:24.505 00:30:24.505 --- 10.0.0.2 ping statistics --- 00:30:24.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.505 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:24.505 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:24.506 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=107300 00:30:24.506 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:24.506 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 107300 00:30:24.506 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 107300 ']' 00:30:24.506 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.506 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.506 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.506 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.506 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:24.506 [2024-12-06 13:24:13.112156] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:24.506 [2024-12-06 13:24:13.113545] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:30:24.506 [2024-12-06 13:24:13.113623] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.764 [2024-12-06 13:24:13.266628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:24.764 [2024-12-06 13:24:13.332588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.764 [2024-12-06 13:24:13.332661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.764 [2024-12-06 13:24:13.332683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.764 [2024-12-06 13:24:13.332693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.764 [2024-12-06 13:24:13.332702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:24.764 [2024-12-06 13:24:13.334058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:24.764 [2024-12-06 13:24:13.334193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:24.764 [2024-12-06 13:24:13.334329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:24.764 [2024-12-06 13:24:13.334331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:25.022 [2024-12-06 13:24:13.441840] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:25.022 [2024-12-06 13:24:13.441956] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:25.022 [2024-12-06 13:24:13.442213] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
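The `waitforlisten 107300` call traced here gates everything that follows: it blocks until the freshly started target both stays alive and answers on its RPC socket. A minimal sketch of that helper, assuming the default /var/tmp/spdk.sock address and the stock scripts/rpc.py client (the real implementation in common/autotest_common.sh adds xtrace handling and a configurable max_retries):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1                      # target process died
        # any cheap RPC proves the socket is up and accepting connections
        rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1                                                         # never came up in time
}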
00:30:25.022 [2024-12-06 13:24:13.442381] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:25.022 [2024-12-06 13:24:13.442674] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.022 [2024-12-06 13:24:13.531387] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.022 Malloc0 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.022 [2024-12-06 13:24:13.599350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:25.022 { 00:30:25.022 "params": { 00:30:25.022 "name": "Nvme$subsystem", 00:30:25.022 "trtype": "$TEST_TRANSPORT", 00:30:25.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.022 "adrfam": "ipv4", 00:30:25.022 "trsvcid": "$NVMF_PORT", 00:30:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.022 "hdgst": ${hdgst:-false}, 00:30:25.022 "ddgst": ${ddgst:-false} 00:30:25.022 }, 00:30:25.022 "method": "bdev_nvme_attach_controller" 00:30:25.022 } 00:30:25.022 EOF 00:30:25.022 )") 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:30:25.022 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:25.022 "params": { 00:30:25.022 "name": "Nvme1", 00:30:25.022 "trtype": "tcp", 00:30:25.022 "traddr": "10.0.0.3", 00:30:25.022 "adrfam": "ipv4", 00:30:25.022 "trsvcid": "4420", 00:30:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:25.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:25.023 "hdgst": false, 00:30:25.023 "ddgst": false 00:30:25.023 }, 00:30:25.023 "method": "bdev_nvme_attach_controller" 00:30:25.023 }' 00:30:25.281 [2024-12-06 13:24:13.663056] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:30:25.281 [2024-12-06 13:24:13.663156] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107342 ] 00:30:25.281 [2024-12-06 13:24:13.817607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:25.281 [2024-12-06 13:24:13.885977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.281 [2024-12-06 13:24:13.886113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:25.281 [2024-12-06 13:24:13.886119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.539 I/O targets: 00:30:25.539 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:25.539 00:30:25.539 00:30:25.539 CUnit - A unit testing framework for C - Version 2.1-3 00:30:25.539 http://cunit.sourceforge.net/ 00:30:25.539 00:30:25.539 00:30:25.539 Suite: bdevio tests on: Nvme1n1 00:30:25.539 Test: blockdev write read block ...passed 00:30:25.539 Test: blockdev write zeroes read block ...passed 00:30:25.539 Test: blockdev write zeroes read no split ...passed 00:30:25.539 Test: blockdev write zeroes read split ...passed 00:30:25.797 Test: blockdev write zeroes read split partial ...passed 00:30:25.797 Test: blockdev reset ...[2024-12-06 13:24:14.179403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:25.797 [2024-12-06 13:24:14.179515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe077b0 (9): Bad file descriptor 00:30:25.797 [2024-12-06 13:24:14.182958] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
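The `printf '%s\n'` above emits only the per-controller fragment; gen_nvmf_target_json wraps it into the full configuration document that bdevio reads from /dev/fd/62. Assembled, it follows the usual SPDK subsystem-config layout, roughly as below (a sketch reconstructed from the printed params plus the standard config shape, not copied from the log):

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}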
00:30:25.797 passed 00:30:25.797 Test: blockdev write read 8 blocks ...passed 00:30:25.797 Test: blockdev write read size > 128k ...passed 00:30:25.797 Test: blockdev write read invalid size ...passed 00:30:25.797 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:25.797 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:25.797 Test: blockdev write read max offset ...passed 00:30:25.797 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:25.797 Test: blockdev writev readv 8 blocks ...passed 00:30:25.797 Test: blockdev writev readv 30 x 1block ...passed 00:30:25.797 Test: blockdev writev readv block ...passed 00:30:25.797 Test: blockdev writev readv size > 128k ...passed 00:30:25.797 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:25.797 Test: blockdev comparev and writev ...[2024-12-06 13:24:14.359363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:25.797 [2024-12-06 13:24:14.359401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.797 [2024-12-06 13:24:14.359422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:25.797 [2024-12-06 13:24:14.359432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:25.797 [2024-12-06 13:24:14.360010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:25.797 [2024-12-06 13:24:14.360036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:25.797 [2024-12-06 13:24:14.360054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:25.797 [2024-12-06 13:24:14.360066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:25.797 [2024-12-06 13:24:14.360623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:25.798 [2024-12-06 13:24:14.360652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:25.798 [2024-12-06 13:24:14.360670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:25.798 [2024-12-06 13:24:14.360681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:25.798 [2024-12-06 13:24:14.361228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:25.798 [2024-12-06 13:24:14.361257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:25.798 [2024-12-06 13:24:14.361274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:25.798 [2024-12-06 13:24:14.361284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:25.798 passed 00:30:26.056 Test: blockdev nvme passthru rw ...passed 00:30:26.056 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:24:14.445843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:26.056 [2024-12-06 13:24:14.445874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:26.056 [2024-12-06 13:24:14.446071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:26.056 [2024-12-06 13:24:14.446090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:26.057 [2024-12-06 13:24:14.446578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:26.057 [2024-12-06 13:24:14.446609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:26.057 [2024-12-06 13:24:14.446835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:26.057 [2024-12-06 13:24:14.446919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:26.057 passed 00:30:26.057 Test: blockdev nvme admin passthru ...passed 00:30:26.057 Test: blockdev copy ...passed 00:30:26.057 00:30:26.057 Run Summary: Type Total Ran Passed Failed Inactive 00:30:26.057 suites 1 1 n/a 0 0 00:30:26.057 tests 23 23 23 0 0 00:30:26.057 asserts 152 152 152 0 n/a 00:30:26.057 00:30:26.057 Elapsed time = 0.861 seconds 00:30:26.057 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:26.057 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.057 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:26.057 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.057 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:26.057 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:26.057 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:26.057 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:26.315 rmmod nvme_tcp 00:30:26.315 rmmod nvme_fabrics 00:30:26.315 rmmod nvme_keyring 00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
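Teardown (continuing below) removes every firewall rule the test installed without tracking rule numbers: the `ipts` wrapper stamped each rule with an SPDK_NVMF comment at insert time, so `iptr` can sweep them all in one save/filter/restore pass. Condensed from the nvmf/common.sh traces visible in this log, the two helpers amount to:

ipts() {
    # install the rule and tag it so it can be found again later
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # drop every tagged rule in one pass, leaving unrelated rules intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}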
00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 107300 ']' 00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 107300 00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 107300 ']' 00:30:26.315 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 107300 00:30:26.316 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:30:26.316 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.316 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107300 00:30:26.316 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:30:26.316 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:30:26.316 killing process with pid 107300 00:30:26.316 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107300' 00:30:26.316 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 107300 00:30:26.316 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 107300 00:30:26.602 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:26.603 13:24:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:26.603 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:26.937 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:26.937 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.937 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.937 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.937 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:30:26.937 00:30:26.937 real 0m2.868s 00:30:26.937 user 0m7.133s 00:30:26.937 sys 0m1.271s 00:30:26.937 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.937 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:26.937 ************************************ 00:30:26.937 END TEST nvmf_bdevio 00:30:26.937 ************************************ 00:30:26.937 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:26.937 00:30:26.937 real 3m37.740s 00:30:26.937 user 9m42.328s 00:30:26.937 sys 1m19.257s 00:30:26.937 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.937 13:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:26.937 ************************************ 00:30:26.937 END TEST nvmf_target_core_interrupt_mode 00:30:26.937 ************************************ 00:30:26.937 13:24:15 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:26.937 13:24:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:26.937 13:24:15 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.937 13:24:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:26.937 ************************************ 00:30:26.937 START TEST nvmf_interrupt 00:30:26.937 ************************************ 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:26.937 * Looking for test storage... 00:30:26.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:26.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.937 --rc genhtml_branch_coverage=1 00:30:26.937 --rc genhtml_function_coverage=1 00:30:26.937 --rc genhtml_legend=1 00:30:26.937 --rc geninfo_all_blocks=1 00:30:26.937 --rc geninfo_unexecuted_blocks=1 00:30:26.937 00:30:26.937 ' 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:26.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.937 --rc genhtml_branch_coverage=1 00:30:26.937 --rc genhtml_function_coverage=1 00:30:26.937 --rc genhtml_legend=1 00:30:26.937 --rc geninfo_all_blocks=1 00:30:26.937 --rc geninfo_unexecuted_blocks=1 00:30:26.937 00:30:26.937 ' 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:26.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.937 --rc genhtml_branch_coverage=1 00:30:26.937 --rc genhtml_function_coverage=1 00:30:26.937 --rc genhtml_legend=1 00:30:26.937 --rc geninfo_all_blocks=1 00:30:26.937 --rc geninfo_unexecuted_blocks=1 00:30:26.937 00:30:26.937 ' 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:26.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.937 --rc genhtml_branch_coverage=1 00:30:26.937 --rc genhtml_function_coverage=1 00:30:26.937 --rc genhtml_legend=1 00:30:26.937 --rc geninfo_all_blocks=1 00:30:26.937 --rc geninfo_unexecuted_blocks=1 00:30:26.937 00:30:26.937 ' 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.937 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
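The lcov probe above funnels through `lt 1.15 2`, i.e. cmp_versions in scripts/common.sh. Stripped to its core, the comparison splits each version string on `.`, `-`, or `:` and walks the fields numerically, treating missing fields as 0 (a sketch of the logic visible in the trace; the real helper also validates that each field is decimal):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-: op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
}

So `lt 1.15 2` compares 1 against 2 in the first field and returns true immediately, which is why the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options get enabled above.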
00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:30:26.938 13:24:15 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:30:26.938 13:24:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
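The address plan being assigned here and just below is easier to follow as a picture: four veth pairs hang off one bridge, giving the initiator side (root namespace) and the target side (namespace nvmf_tgt_ns_spdk) two L2-adjacent /24 addresses each (an illustrative summary, not log output):

#  root namespace                          nvmf_br (bridge)     netns nvmf_tgt_ns_spdk
#  nvmf_init_if   10.0.0.1/24 <--veth--> nvmf_init_br ----+
#  nvmf_init_if2  10.0.0.2/24 <--veth--> nvmf_init_br2 ---+
#                                                         +--- nvmf_tgt_br  <--veth--> nvmf_tgt_if   10.0.0.3/24
#                                                         +--- nvmf_tgt_br2 <--veth--> nvmf_tgt_if2  10.0.0.4/24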
00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:27.195 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:27.196 Cannot find device "nvmf_init_br" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:27.196 Cannot find device "nvmf_init_br2" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:27.196 Cannot find device "nvmf_tgt_br" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:27.196 Cannot find device "nvmf_tgt_br2" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:27.196 Cannot find device "nvmf_init_br" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:27.196 Cannot find device "nvmf_init_br2" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:27.196 Cannot find device "nvmf_tgt_br" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:27.196 Cannot find device "nvmf_tgt_br2" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:27.196 Cannot find device "nvmf_br" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:30:27.196 Cannot find device "nvmf_init_if" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:27.196 Cannot find device "nvmf_init_if2" 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:27.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:27.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:27.196 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
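Every interface traced around this point follows the same recipe; written out once for the first target pair (a condensed restatement of the commands above, including the bridge enslaving that follows just below):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end moves into the netns
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up                                   # host end stays in the root ns...
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_tgt_br master nvmf_br                       # ...and joins the bridge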
00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:27.454 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:27.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:27.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:30:27.455 00:30:27.455 --- 10.0.0.3 ping statistics --- 00:30:27.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.455 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:27.455 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:27.455 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:30:27.455 00:30:27.455 --- 10.0.0.4 ping statistics --- 00:30:27.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.455 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:27.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:27.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:30:27.455 00:30:27.455 --- 10.0.0.1 ping statistics --- 00:30:27.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.455 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:27.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:27.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:30:27.455 00:30:27.455 --- 10.0.0.2 ping statistics --- 00:30:27.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.455 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=107590 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 107590 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 107590 ']' 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.455 13:24:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:27.455 [2024-12-06 13:24:16.052672] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:27.455 [2024-12-06 13:24:16.054129] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:30:27.455 [2024-12-06 13:24:16.054230] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.713 [2024-12-06 13:24:16.208012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:27.714 [2024-12-06 13:24:16.276971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
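With connectivity proven, nvmfappstart launches the target inside the namespace with --interrupt-mode, the feature under test: reactors wait on event file descriptors instead of busy-polling, so an idle reactor should burn roughly 0% CPU. A minimal sketch of the launch; the harness's waitforlisten polls the RPC socket, and the loop below is an assumed equivalent rather than its actual code:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &      # two reactors, cores 0-1
nvmfpid=$!
# Wait until the app answers RPCs on /var/tmp/spdk.sock (the default socket).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done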
00:30:27.714 [2024-12-06 13:24:16.277028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.714 [2024-12-06 13:24:16.277043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.714 [2024-12-06 13:24:16.277053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.714 [2024-12-06 13:24:16.277062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.714 [2024-12-06 13:24:16.278523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.714 [2024-12-06 13:24:16.278535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.972 [2024-12-06 13:24:16.388706] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:27.972 [2024-12-06 13:24:16.389317] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:27.972 [2024-12-06 13:24:16.389362] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:28.540 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.540 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:30:28.540 13:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:28.540 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.540 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:28.540 13:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.540 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:28.540 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:28.540 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:28.540 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:30:28.799 5000+0 records in 00:30:28.799 5000+0 records out 00:30:28.799 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0304976 s, 336 MB/s 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:28.799 AIO0 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:28.799 [2024-12-06 13:24:17.239623] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:28.799 [2024-12-06 13:24:17.267911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107590 0 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107590 0 idle 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107590 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107590 -w 256 00:30:28.799 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107590 root 20 0 64.2g 45440 33024 S 0.0 0.4 0:00.32 reactor_0' 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107590 root 20 0 64.2g 45440 33024 S 0.0 0.4 0:00.32 reactor_0 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107590 1 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107590 1 idle 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107590 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107590 -w 256 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107594 root 20 0 64.2g 45440 33024 S 0.0 0.4 0:00.00 reactor_1' 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107594 root 20 0 64.2g 45440 33024 S 0.0 0.4 0:00.00 reactor_1 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=107664 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:29.059 13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:29.059 
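Stripped of the xtrace noise, the target provisioning that happened above is a handful of commands: back a bdev with a 10 MB file so no real NVMe device is needed, create the TCP transport with a 256-deep queue, then build the subsystem whose serial the initiator will grep for later:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000
$rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048   # 2048-byte blocks
$rpc nvmf_create_transport -t tcp -o -u 8192 -q 256        # flags exactly as logged
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420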
13:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107590 0 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107590 0 busy 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107590 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107590 -w 256 00:30:29.060 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:29.317 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107590 root 20 0 64.2g 45440 33024 S 0.0 0.4 0:00.32 reactor_0' 00:30:29.317 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107590 root 20 0 64.2g 45440 33024 S 0.0 0.4 0:00.32 reactor_0 00:30:29.317 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:29.317 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:29.317 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:29.317 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:29.317 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:29.317 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:29.317 13:24:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:30:30.249 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:30:30.249 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:30.249 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107590 -w 256 00:30:30.249 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107590 root 20 0 64.2g 46720 33408 R 99.9 0.4 0:01.74 reactor_0' 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107590 root 20 0 64.2g 46720 33408 R 99.9 0.4 0:01.74 reactor_0 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107590 1 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107590 1 busy 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107590 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107590 -w 256 00:30:30.507 13:24:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107594 root 20 0 64.2g 46720 33408 R 62.5 0.4 0:00.82 reactor_1' 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107594 root 20 0 64.2g 46720 33408 R 62.5 0.4 0:00.82 reactor_1 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=62.5 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=62 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:30.765 13:24:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 107664 00:30:40.738 Initializing NVMe Controllers 00:30:40.738 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:30:40.738 Controller IO queue size 256, less than required. 00:30:40.738 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:40.738 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:40.738 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:40.738 Initialization complete. Launching workers. 
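The busy checks that just passed are the heart of the test: with spdk_nvme_perf hammering the subsystem (queue depth 256, 4 KiB I/O, 30% reads via -M 30, pinned to cores 2-3 so it never shares a CPU with the reactors on 0-1), each reactor thread must reach the 30% busy threshold in top, while before and after the run it must stay at or below the 30% idle threshold. A sketch of the sampling logic the interrupt/common.sh helpers implement above:

reactor_cpu_rate() {   # args: target pid, reactor index -> %CPU of that thread
    top -bHn 1 -p "$1" -w 256 | grep "reactor_$2" | awk '{print $9}'
}
# Poll up to 10 times, one second apart, until the reactor crosses busy.
for _ in $(seq 10); do
    rate=$(reactor_cpu_rate "$nvmfpid" 0); rate=${rate:-0}
    [ "${rate%.*}" -ge 30 ] && break
    sleep 1
done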
00:30:40.738 ======================================================== 00:30:40.738 Latency(us) 00:30:40.738 Device Information : IOPS MiB/s Average min max 00:30:40.738 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 6284.20 24.55 40847.06 4706.98 75037.71 00:30:40.738 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 6422.90 25.09 39942.59 7153.58 63726.44 00:30:40.738 ======================================================== 00:30:40.738 Total : 12707.10 49.64 40389.89 4706.98 75037.71 00:30:40.738 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107590 0 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107590 0 idle 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107590 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:40.738 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:40.739 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:40.739 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107590 -w 256 00:30:40.739 13:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107590 root 20 0 64.2g 46848 33536 S 0.0 0.4 0:13.99 reactor_0' 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107590 root 20 0 64.2g 46848 33536 S 0.0 0.4 0:13.99 reactor_0 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107590 1 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107590 1 idle 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107590 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107590 -w 256 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107594 root 20 0 64.2g 46848 33536 S 0.0 0.4 0:06.82 reactor_1' 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107594 root 20 0 64.2g 46848 33536 S 0.0 0.4 0:06.82 reactor_1 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:40.739 13:24:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107590 0 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107590 0 idle 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107590 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107590 -w 256 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107590 root 20 0 64.2g 49024 33536 S 0.0 0.4 0:14.05 reactor_0' 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107590 root 20 0 64.2g 49024 33536 S 0.0 0.4 0:14.05 reactor_0 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107590 1 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107590 1 idle 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107590 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107590 -w 256 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107594 root 20 0 64.2g 49024 33536 S 0.0 0.4 0:06.83 reactor_1' 00:30:42.112 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107594 root 20 0 64.2g 49024 33536 S 0.0 0.4 0:06.83 reactor_1 00:30:42.113 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:42.113 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:42.113 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:42.113 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:42.113 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:42.113 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:42.113 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:42.113 13:24:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:42.113 13:24:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:42.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:42.371 13:24:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:42.629 rmmod nvme_tcp 00:30:42.629 rmmod nvme_fabrics 00:30:42.629 rmmod nvme_keyring 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 107590 ']' 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt 
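The initiator leg above used the stock kernel driver rather than SPDK: nvme connect joined the subsystem over TCP, lsblk was polled until a block device carrying the subsystem serial appeared, and disconnect plus module unload reversed it. The harness's bounded polling reduces to roughly:

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 \
    --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8
for _ in $(seq 15); do   # waitforserial: give the device up to ~30 s to appear
    [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ] && break
    sleep 2
done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
modprobe -v -r nvme-fabrics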
-- nvmf/common.sh@518 -- # killprocess 107590 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 107590 ']' 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 107590 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107590 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:42.629 killing process with pid 107590 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107590' 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 107590 00:30:42.629 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 107590 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:42.888 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:30:43.147 00:30:43.147 real 0m16.333s 00:30:43.147 user 0m28.584s 00:30:43.147 sys 0m7.504s 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.147 13:24:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:43.147 ************************************ 00:30:43.147 END TEST nvmf_interrupt 00:30:43.147 ************************************ 00:30:43.147 00:30:43.147 real 21m0.329s 00:30:43.147 user 55m11.724s 00:30:43.147 sys 5m2.345s 00:30:43.147 13:24:31 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.147 13:24:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.147 ************************************ 00:30:43.147 END TEST nvmf_tcp 00:30:43.147 ************************************ 00:30:43.406 13:24:31 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:30:43.406 13:24:31 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:43.406 13:24:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:43.406 13:24:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.406 13:24:31 -- common/autotest_common.sh@10 -- # set +x 00:30:43.406 ************************************ 00:30:43.406 START TEST spdkcli_nvmf_tcp 00:30:43.406 ************************************ 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:43.406 * Looking for test storage... 
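Final teardown above is symmetric with setup: kill the target, strip only the firewall rules tagged SPDK_NVMF earlier, dismantle the bridge and veth pairs, and drop the namespace. The rule stripping is a single pipeline, and the namespace removal presumably reduces to ip netns delete (the _remove_spdk_ns body itself is not shown in the log):

# Remove every rule the ipts wrapper added, and nothing else.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Assumed equivalent of _remove_spdk_ns:
ip netns delete nvmf_tgt_ns_spdk

Tagging rules with comments and filtering iptables-save is more robust than remembering rule numbers, since positions shift as other rules come and go.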
00:30:43.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:43.406 13:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:43.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.407 --rc genhtml_branch_coverage=1 00:30:43.407 --rc genhtml_function_coverage=1 00:30:43.407 --rc genhtml_legend=1 00:30:43.407 --rc geninfo_all_blocks=1 00:30:43.407 --rc geninfo_unexecuted_blocks=1 00:30:43.407 00:30:43.407 ' 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:43.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.407 --rc genhtml_branch_coverage=1 
00:30:43.407 --rc genhtml_function_coverage=1 00:30:43.407 --rc genhtml_legend=1 00:30:43.407 --rc geninfo_all_blocks=1 00:30:43.407 --rc geninfo_unexecuted_blocks=1 00:30:43.407 00:30:43.407 ' 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:43.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.407 --rc genhtml_branch_coverage=1 00:30:43.407 --rc genhtml_function_coverage=1 00:30:43.407 --rc genhtml_legend=1 00:30:43.407 --rc geninfo_all_blocks=1 00:30:43.407 --rc geninfo_unexecuted_blocks=1 00:30:43.407 00:30:43.407 ' 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:43.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.407 --rc genhtml_branch_coverage=1 00:30:43.407 --rc genhtml_function_coverage=1 00:30:43.407 --rc genhtml_legend=1 00:30:43.407 --rc geninfo_all_blocks=1 00:30:43.407 --rc geninfo_unexecuted_blocks=1 00:30:43.407 00:30:43.407 ' 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:30:43.407 13:24:31 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:43.407 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=108002 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 108002 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 108002 ']' 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.407 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.705 [2024-12-06 13:24:32.079097] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:30:43.705 [2024-12-06 13:24:32.079202] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108002 ] 00:30:43.705 [2024-12-06 13:24:32.230315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:43.705 [2024-12-06 13:24:32.295555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.705 [2024-12-06 13:24:32.295566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.978 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.978 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:30:43.978 13:24:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:43.978 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:43.978 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.978 13:24:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:43.978 13:24:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:43.978 13:24:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:43.979 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:43.979 13:24:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.979 13:24:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:43.979 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:43.979 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:43.979 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:43.979 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:43.979 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:43.979 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:43.979 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:43.979 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:43.979 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:43.979 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:43.979 ' 00:30:47.258 [2024-12-06 13:24:35.267669] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.189 [2024-12-06 13:24:36.592742] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:50.714 [2024-12-06 13:24:39.026392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:52.641 [2024-12-06 13:24:41.147909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:54.541 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:54.541 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:54.541 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:54.541 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
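The wall of quoting above is a single argument to spdkcli_job.py: newline-separated triples of a spdkcli command, a string expected in its output, and True when the command must succeed; the "Executing command" lines around this point echo back each parsed triple. Inferred from this invocation (the exact argument shape is read off the log, not the script's documentation, so treat it as an assumption), a minimal two-command batch would look like:

/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "
'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
'/nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192' '' True
"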
00:30:54.541 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:54.541 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:54.541 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:54.541 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:54.541 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:54.541 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:54.541 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:54.541 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:54.541 13:24:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:54.541 13:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:54.541 13:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:30:54.541 13:24:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:54.541 13:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.541 13:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:54.541 13:24:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:54.541 13:24:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:30:54.799 13:24:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:55.058 13:24:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:55.058 13:24:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:55.058 13:24:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.058 13:24:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.058 13:24:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:55.058 13:24:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:55.058 13:24:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.058 13:24:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:55.058 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:55.058 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:55.058 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:55.058 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:55.058 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:55.058 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:55.058 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:55.058 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:55.058 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:55.058 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:55.058 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:55.058 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:55.058 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:55.058 ' 00:31:01.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:01.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:01.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:01.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:01.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:01.614 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:01.614 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:01.614 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:01.614 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:01.614 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:01.614 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:01.614 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:01.614 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:01.614 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:01.614 13:24:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:01.614 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:01.614 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 108002 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 108002 ']' 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 108002 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108002 00:31:01.615 killing process with pid 108002 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108002' 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 108002 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 108002 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 108002 ']' 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 108002 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 108002 ']' 00:31:01.615 Process with pid 108002 is not found 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 108002 00:31:01.615 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (108002) - No such process 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 108002 is not found' 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:01.615 00:31:01.615 real 0m17.720s 00:31:01.615 user 0m38.606s 00:31:01.615 sys 0m0.908s 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.615 13:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:01.615 ************************************ 00:31:01.615 END TEST spdkcli_nvmf_tcp 00:31:01.615 ************************************ 00:31:01.615 13:24:49 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:01.615 13:24:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:01.615 13:24:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.615 13:24:49 -- common/autotest_common.sh@10 -- # set +x 00:31:01.615 ************************************ 00:31:01.615 START TEST nvmf_identify_passthru 00:31:01.615 ************************************ 00:31:01.615 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:01.615 * Looking for test storage... 00:31:01.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:01.615 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:01.615 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:31:01.615 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:01.615 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:01.615 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.615 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:01.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.615 --rc genhtml_branch_coverage=1 00:31:01.615 --rc genhtml_function_coverage=1 00:31:01.615 --rc genhtml_legend=1 00:31:01.615 --rc geninfo_all_blocks=1 00:31:01.615 --rc geninfo_unexecuted_blocks=1 00:31:01.615 00:31:01.615 ' 00:31:01.615 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:01.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.615 --rc genhtml_branch_coverage=1 00:31:01.615 --rc genhtml_function_coverage=1 00:31:01.615 --rc genhtml_legend=1 00:31:01.615 --rc geninfo_all_blocks=1 00:31:01.615 --rc geninfo_unexecuted_blocks=1 00:31:01.615 00:31:01.615 ' 00:31:01.615 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:01.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.615 --rc genhtml_branch_coverage=1 00:31:01.615 --rc genhtml_function_coverage=1 00:31:01.615 --rc genhtml_legend=1 00:31:01.615 --rc geninfo_all_blocks=1 00:31:01.615 --rc geninfo_unexecuted_blocks=1 00:31:01.615 00:31:01.615 ' 00:31:01.615 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:01.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.615 --rc genhtml_branch_coverage=1 00:31:01.615 --rc genhtml_function_coverage=1 00:31:01.615 --rc genhtml_legend=1 00:31:01.615 --rc geninfo_all_blocks=1 00:31:01.615 --rc geninfo_unexecuted_blocks=1 00:31:01.615 00:31:01.615 ' 00:31:01.615 13:24:49 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.615 
13:24:49 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.615 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.615 13:24:49 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.615 13:24:49 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.615 13:24:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.616 13:24:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.616 13:24:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:01.616 13:24:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:01.616 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:01.616 13:24:49 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:01.616 13:24:49 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.616 13:24:49 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.616 13:24:49 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.616 13:24:49 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.616 13:24:49 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.616 13:24:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.616 13:24:49 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.616 13:24:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:01.616 13:24:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.616 13:24:49 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.616 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:01.616 13:24:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:01.616 Cannot find device "nvmf_init_br" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:01.616 Cannot find device "nvmf_init_br2" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:01.616 Cannot find device "nvmf_tgt_br" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:01.616 Cannot find device "nvmf_tgt_br2" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:01.616 Cannot find device "nvmf_init_br" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:01.616 Cannot find device "nvmf_init_br2" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:01.616 Cannot find device "nvmf_tgt_br" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:01.616 Cannot find device "nvmf_tgt_br2" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:01.616 Cannot find device "nvmf_br" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:01.616 Cannot find device "nvmf_init_if" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:01.616 Cannot find device "nvmf_init_if2" 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:01.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:01.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:01.616 13:24:49 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:01.616 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:01.616 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:01.616 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:01.616 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:01.616 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:01.616 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:01.616 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:01.616 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:01.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:01.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:31:01.617 00:31:01.617 --- 10.0.0.3 ping statistics --- 00:31:01.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.617 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:01.617 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:01.617 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:31:01.617 00:31:01.617 --- 10.0.0.4 ping statistics --- 00:31:01.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.617 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:01.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:01.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:31:01.617 00:31:01.617 --- 10.0.0.1 ping statistics --- 00:31:01.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.617 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:01.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:01.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:31:01.617 00:31:01.617 --- 10.0.0.2 ping statistics --- 00:31:01.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.617 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:01.617 13:24:50 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:01.617 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:01.617 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:01.617 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:31:01.617 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:31:01.617 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:31:01.617 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:31:01.617 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:01.617 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:01.876 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
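The nvme_serial_number assignment just above is the product of a three-stage pipeline: spdk_nvme_identify dumps the local controller's identify data over PCIe, grep isolates the "Serial Number:" line, and awk keeps its third field. A minimal standalone sketch of the same probe, assuming the QEMU NVMe controller at the BDF this run discovered (0000:00:10.0) and the repo layout used in this workspace:

  bdf=0000:00:10.0
  identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  # -r selects the local PCIe controller by transport address; -i 0 matches the
  # invocation logged above. This run recovered "12340" from the emulated drive.
  serial=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')
  echo "$serial"

The same grep/awk pattern is reused below against the NVMe-oF side ("Model Number:", then the TCP-attached controller) so the test can compare passthrough identify data field by field.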
00:31:01.876 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:31:01.876 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:01.876 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:02.134 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:31:02.134 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:02.134 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.134 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.134 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:02.134 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:02.134 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.134 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=108509 00:31:02.134 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.134 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:02.134 13:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 108509 00:31:02.134 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 108509 ']' 00:31:02.134 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.134 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.134 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.134 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.134 13:24:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.134 [2024-12-06 13:24:50.735894] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:31:02.134 [2024-12-06 13:24:50.736195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.393 [2024-12-06 13:24:50.888766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:02.393 [2024-12-06 13:24:50.952085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.393 [2024-12-06 13:24:50.952387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.394 [2024-12-06 13:24:50.952633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.394 [2024-12-06 13:24:50.952784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
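nvmf_tgt is launched here with --wait-for-rpc, so after the EAL/reactor bring-up logged above it idles in the startup state and accepts configuration RPCs before its subsystems initialize; that ordering is what allows the custom passthru identify handler to be enabled at all. A sketch of the equivalent sequence driven through scripts/rpc.py instead of the test's rpc_cmd helper, with the same methods and arguments as the trace that follows (the rpc.py path is assumed from this workspace layout):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_set_config --passthru-identify-ctrlr   # startup-state only: must precede framework_start_init
  $rpc framework_start_init                        # leave startup state, initialize SPDK subsystems
  $rpc nvmf_create_transport -t tcp -o -u 8192     # flags mirror the logged invocation; -u sets the I/O unit size
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

In the actual run these calls go through rpc_cmd inside the nvmf_tgt_ns_spdk namespace, which is why the listener address 10.0.0.3 is the veth target interface configured earlier in this log.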
00:31:02.394 [2024-12-06 13:24:50.952951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.394 [2024-12-06 13:24:50.954340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.394 [2024-12-06 13:24:50.957492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.394 [2024-12-06 13:24:50.957670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.394 [2024-12-06 13:24:50.957682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.394 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.394 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:31:02.394 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:02.394 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.394 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.651 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.651 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:02.651 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.651 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.651 [2024-12-06 13:24:51.130518] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:02.651 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.651 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.651 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.651 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.651 [2024-12-06 13:24:51.144716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.651 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.651 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.652 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.652 Nvme0n1 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.652 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.652 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.652 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.652 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.652 [2024-12-06 13:24:51.288640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:02.910 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.910 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:02.910 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.910 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.910 [ 00:31:02.910 { 00:31:02.910 "allow_any_host": true, 00:31:02.910 "hosts": [], 00:31:02.910 "listen_addresses": [], 00:31:02.910 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:02.910 "subtype": "Discovery" 00:31:02.910 }, 00:31:02.910 { 00:31:02.910 "allow_any_host": true, 00:31:02.910 "hosts": [], 00:31:02.910 "listen_addresses": [ 00:31:02.910 { 00:31:02.910 "adrfam": "IPv4", 00:31:02.910 "traddr": "10.0.0.3", 00:31:02.910 "trsvcid": "4420", 00:31:02.910 "trtype": "TCP" 00:31:02.910 } 00:31:02.910 ], 00:31:02.910 "max_cntlid": 65519, 00:31:02.910 "max_namespaces": 1, 00:31:02.910 "min_cntlid": 1, 00:31:02.910 "model_number": "SPDK bdev Controller", 00:31:02.910 "namespaces": [ 00:31:02.910 { 00:31:02.910 "bdev_name": "Nvme0n1", 00:31:02.910 "name": "Nvme0n1", 00:31:02.910 "nguid": "1E96D40E7E124E3CA9CB8BAB13DB0D11", 00:31:02.910 "nsid": 1, 00:31:02.910 "uuid": "1e96d40e-7e12-4e3c-a9cb-8bab13db0d11" 00:31:02.910 } 00:31:02.910 ], 00:31:02.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.910 "serial_number": "SPDK00000000000001", 00:31:02.910 "subtype": "NVMe" 00:31:02.910 } 00:31:02.910 ] 00:31:02.910 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.910 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:02.910 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:02.910 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:03.169 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:31:03.169 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:03.169 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:03.169 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:03.427 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:31:03.427 13:24:51 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:31:03.427 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:31:03.427 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.427 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:03.427 13:24:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.427 rmmod nvme_tcp 00:31:03.427 rmmod nvme_fabrics 00:31:03.427 rmmod nvme_keyring 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 108509 ']' 00:31:03.427 13:24:51 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 108509 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 108509 ']' 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 108509 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108509 00:31:03.427 killing process with pid 108509 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:03.427 13:24:51 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108509' 00:31:03.427 13:24:52 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 108509 00:31:03.427 13:24:52 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 108509 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:03.685 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:03.686 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:03.686 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:03.686 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:03.686 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:03.686 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:03.686 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:03.944 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:03.944 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:03.944 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:03.944 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.944 13:24:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:03.944 13:24:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.944 13:24:52 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:31:03.944 00:31:03.944 real 0m2.852s 00:31:03.944 user 0m5.385s 00:31:03.944 sys 0m0.928s 00:31:03.944 13:24:52 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.944 ************************************ 00:31:03.944 13:24:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.944 END TEST nvmf_identify_passthru 00:31:03.944 ************************************ 00:31:03.944 13:24:52 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:31:03.944 13:24:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:03.944 13:24:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:03.944 13:24:52 -- common/autotest_common.sh@10 -- # set +x 00:31:03.944 ************************************ 00:31:03.944 START TEST nvmf_dif 00:31:03.944 ************************************ 00:31:03.944 13:24:52 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:31:03.944 * Looking for test storage... 
00:31:03.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:03.944 13:24:52 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:03.944 13:24:52 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:03.944 13:24:52 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:31:04.203 13:24:52 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.203 13:24:52 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:04.203 13:24:52 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.203 13:24:52 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:04.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.203 --rc genhtml_branch_coverage=1 00:31:04.203 --rc genhtml_function_coverage=1 00:31:04.203 --rc genhtml_legend=1 00:31:04.203 --rc geninfo_all_blocks=1 00:31:04.203 --rc geninfo_unexecuted_blocks=1 00:31:04.203 00:31:04.203 ' 00:31:04.203 13:24:52 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:04.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.203 --rc genhtml_branch_coverage=1 00:31:04.203 --rc genhtml_function_coverage=1 00:31:04.203 --rc genhtml_legend=1 00:31:04.203 --rc geninfo_all_blocks=1 00:31:04.203 --rc geninfo_unexecuted_blocks=1 00:31:04.203 00:31:04.203 ' 00:31:04.203 13:24:52 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:31:04.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.203 --rc genhtml_branch_coverage=1 00:31:04.203 --rc genhtml_function_coverage=1 00:31:04.203 --rc genhtml_legend=1 00:31:04.203 --rc geninfo_all_blocks=1 00:31:04.203 --rc geninfo_unexecuted_blocks=1 00:31:04.203 00:31:04.203 ' 00:31:04.203 13:24:52 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:04.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.203 --rc genhtml_branch_coverage=1 00:31:04.203 --rc genhtml_function_coverage=1 00:31:04.203 --rc genhtml_legend=1 00:31:04.203 --rc geninfo_all_blocks=1 00:31:04.203 --rc geninfo_unexecuted_blocks=1 00:31:04.203 00:31:04.203 ' 00:31:04.204 13:24:52 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:04.204 13:24:52 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.204 13:24:52 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.204 13:24:52 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.204 13:24:52 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.204 13:24:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.204 13:24:52 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.204 13:24:52 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.204 13:24:52 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:04.204 13:24:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:04.204 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:04.204 13:24:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:04.204 13:24:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:04.204 13:24:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:04.204 13:24:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:04.204 13:24:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.204 13:24:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:04.204 13:24:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:04.204 13:24:52 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:04.204 Cannot find device "nvmf_init_br" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@162 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:04.204 Cannot find device "nvmf_init_br2" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@163 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:04.204 Cannot find device "nvmf_tgt_br" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@164 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:04.204 Cannot find device "nvmf_tgt_br2" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@165 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:04.204 Cannot find device "nvmf_init_br" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@166 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:04.204 Cannot find device "nvmf_init_br2" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@167 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:04.204 Cannot find device "nvmf_tgt_br" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@168 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:04.204 Cannot find device "nvmf_tgt_br2" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@169 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:04.204 Cannot find device "nvmf_br" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@170 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:31:04.204 Cannot find device "nvmf_init_if" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@171 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:04.204 Cannot find device "nvmf_init_if2" 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@172 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:04.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@173 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:04.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@174 -- # true 00:31:04.204 13:24:52 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:04.463 13:24:52 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:04.463 13:24:53 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:04.463 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:04.463 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:31:04.463 00:31:04.463 --- 10.0.0.3 ping statistics --- 00:31:04.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.463 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:04.463 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:04.463 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:31:04.463 00:31:04.463 --- 10.0.0.4 ping statistics --- 00:31:04.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.463 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:04.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:04.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:31:04.463 00:31:04.463 --- 10.0.0.1 ping statistics --- 00:31:04.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.463 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:04.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:04.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:31:04.463 00:31:04.463 --- 10.0.0.2 ping statistics --- 00:31:04.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.463 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:04.463 13:24:53 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:05.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:05.031 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:05.031 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:05.031 13:24:53 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.031 13:24:53 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:05.031 13:24:53 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:05.031 13:24:53 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.031 13:24:53 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:05.031 13:24:53 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:05.031 13:24:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:05.031 13:24:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:05.031 13:24:53 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:05.031 13:24:53 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:05.031 13:24:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:05.031 13:24:53 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=108895 00:31:05.031 13:24:53 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:05.031 13:24:53 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 108895 00:31:05.031 13:24:53 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 108895 ']' 00:31:05.031 13:24:53 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.031 13:24:53 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.031 13:24:53 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.031 13:24:53 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.031 13:24:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:05.031 [2024-12-06 13:24:53.554650] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:31:05.031 [2024-12-06 13:24:53.554986] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.290 [2024-12-06 13:24:53.711805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.290 [2024-12-06 13:24:53.777125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
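For orientation: nvmf_tgt (pid 108895) from here on runs inside the nvmf_tgt_ns_spdk namespace, reachable only over the veth legs built by nvmf_veth_init above; the four pings are the smoke test that all four addresses answer before the target starts. Condensed to one leg per side, the traced commands amount to the following (device names and addresses exactly as traced; the *_if2/*_br2 legs are set up identically):

  # default netns                        nvmf_tgt_ns_spdk netns
  #   nvmf_init_if  (10.0.0.1/24) ─┐
  #   nvmf_init_if2 (10.0.0.2/24) ─┼── nvmf_br ──┬─ nvmf_tgt_if  (10.0.0.3/24)
  #                                   (bridge)   └─ nvmf_tgt_if2 (10.0.0.4/24)
  # each leg is a veth pair; the *_br peers stay in the default netns,
  # enslaved to nvmf_br, while the nvmf_tgt_if* ends move into the namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # (all devices are also brought up; ACCEPT rules for TCP/4420 and a
  #  FORWARD rule on nvmf_br are added via ipts, tagged SPDK_NVMF so the
  #  later teardown can strip them)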
00:31:05.290 [2024-12-06 13:24:53.777192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:05.290 [2024-12-06 13:24:53.777207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.290 [2024-12-06 13:24:53.777218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.290 [2024-12-06 13:24:53.777228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:05.290 [2024-12-06 13:24:53.777736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.290 13:24:53 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.290 13:24:53 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:31:05.290 13:24:53 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:05.290 13:24:53 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:05.290 13:24:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:05.549 13:24:53 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.549 13:24:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:05.549 13:24:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:05.549 13:24:53 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.549 13:24:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:05.549 [2024-12-06 13:24:53.965276] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.549 13:24:53 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.549 13:24:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:05.549 13:24:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:05.549 13:24:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.549 13:24:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:05.549 ************************************ 00:31:05.549 START TEST fio_dif_1_default 00:31:05.549 ************************************ 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:05.549 bdev_null0 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.549 13:24:53 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.549 13:24:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:05.549 [2024-12-06 13:24:54.009395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:05.549 { 00:31:05.549 "params": { 00:31:05.549 "name": "Nvme$subsystem", 00:31:05.549 "trtype": "$TEST_TRANSPORT", 00:31:05.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.549 "adrfam": "ipv4", 00:31:05.549 "trsvcid": "$NVMF_PORT", 00:31:05.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.549 "hdgst": ${hdgst:-false}, 00:31:05.549 "ddgst": ${ddgst:-false} 00:31:05.549 }, 00:31:05.549 "method": "bdev_nvme_attach_controller" 00:31:05.549 } 00:31:05.549 EOF 00:31:05.549 )") 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:05.549 13:24:54 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:31:05.549 13:24:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:05.549 "params": { 00:31:05.549 "name": "Nvme0", 00:31:05.549 "trtype": "tcp", 00:31:05.549 "traddr": "10.0.0.3", 00:31:05.549 "adrfam": "ipv4", 00:31:05.549 "trsvcid": "4420", 00:31:05.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:05.550 "hdgst": false, 00:31:05.550 "ddgst": false 00:31:05.550 }, 00:31:05.550 "method": "bdev_nvme_attach_controller" 00:31:05.550 }' 00:31:05.550 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:05.550 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:05.550 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.550 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:05.550 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:05.550 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:05.550 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:05.550 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:05.550 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:05.550 13:24:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.808 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:05.808 fio-3.35 00:31:05.808 Starting 1 thread 00:31:18.030 00:31:18.030 filename0: (groupid=0, jobs=1): err= 0: pid=108966: Fri Dec 6 13:25:04 2024 00:31:18.030 read: IOPS=367, BW=1471KiB/s (1506kB/s)(14.4MiB/10019msec) 00:31:18.030 slat (nsec): min=5966, max=58469, avg=8889.10, stdev=4299.69 00:31:18.030 clat (usec): min=360, max=41608, avg=10851.32, stdev=17666.59 00:31:18.030 lat (usec): min=366, max=41618, avg=10860.21, stdev=17666.78 00:31:18.030 clat percentiles (usec): 00:31:18.030 | 1.00th=[ 396], 5.00th=[ 416], 10.00th=[ 433], 20.00th=[ 457], 00:31:18.030 | 30.00th=[ 474], 40.00th=[ 486], 50.00th=[ 498], 60.00th=[ 
515], 00:31:18.030 | 70.00th=[ 545], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:31:18.030 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:18.030 | 99.99th=[41681] 00:31:18.030 bw ( KiB/s): min= 864, max= 2784, per=100.00%, avg=1472.00, stdev=417.49, samples=20 00:31:18.030 iops : min= 216, max= 696, avg=368.00, stdev=104.37, samples=20 00:31:18.030 lat (usec) : 500=50.60%, 750=23.67% 00:31:18.030 lat (msec) : 4=0.11%, 50=25.62% 00:31:18.030 cpu : usr=91.89%, sys=7.59%, ctx=20, majf=0, minf=9 00:31:18.030 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.030 issued rwts: total=3684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.030 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:18.030 00:31:18.030 Run status group 0 (all jobs): 00:31:18.030 READ: bw=1471KiB/s (1506kB/s), 1471KiB/s-1471KiB/s (1506kB/s-1506kB/s), io=14.4MiB (15.1MB), run=10019-10019msec 00:31:18.030 13:25:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:18.030 13:25:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:18.030 13:25:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:18.030 13:25:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:18.030 13:25:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:18.030 13:25:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:18.030 13:25:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.030 13:25:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:18.030 13:25:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.030 13:25:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 ************************************ 00:31:18.031 END TEST fio_dif_1_default 00:31:18.031 ************************************ 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.031 00:31:18.031 real 0m11.127s 00:31:18.031 user 0m9.942s 00:31:18.031 sys 0m1.047s 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 13:25:05 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:18.031 13:25:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:18.031 13:25:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 ************************************ 00:31:18.031 START TEST fio_dif_1_multi_subsystems 00:31:18.031 ************************************ 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 
-- # local files=1 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 bdev_null0 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 [2024-12-06 13:25:05.194311] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 bdev_null1 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:18.031 { 00:31:18.031 "params": { 00:31:18.031 "name": "Nvme$subsystem", 00:31:18.031 "trtype": "$TEST_TRANSPORT", 00:31:18.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.031 "adrfam": "ipv4", 00:31:18.031 "trsvcid": "$NVMF_PORT", 00:31:18.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.031 "hdgst": ${hdgst:-false}, 00:31:18.031 "ddgst": ${ddgst:-false} 00:31:18.031 }, 00:31:18.031 "method": "bdev_nvme_attach_controller" 00:31:18.031 } 00:31:18.031 EOF 00:31:18.031 )") 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:18.031 { 00:31:18.031 "params": { 00:31:18.031 "name": "Nvme$subsystem", 00:31:18.031 "trtype": "$TEST_TRANSPORT", 00:31:18.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.031 "adrfam": "ipv4", 00:31:18.031 "trsvcid": "$NVMF_PORT", 00:31:18.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.031 "hdgst": ${hdgst:-false}, 00:31:18.031 "ddgst": ${ddgst:-false} 00:31:18.031 }, 00:31:18.031 "method": "bdev_nvme_attach_controller" 00:31:18.031 } 00:31:18.031 EOF 00:31:18.031 )") 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
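The jq validation just traced and the IFS=, printf that follows join the two per-subsystem heredoc fragments generated above into a single attach config: Nvme0 → cnode0 and Nvme1 → cnode1, both at 10.0.0.3:4420. fio itself is launched with the SPDK bdev engine preloaded; the /dev/fd/62 and /dev/fd/61 arguments in the trace are consistent with process substitution feeding it the JSON config and the generated job file, so a sketch of the launch under that assumption is:

  # LD_PRELOAD makes fio load the spdk_bdev ioengine plugin; the two
  # process substitutions appear to fio as /dev/fd/NN paths.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev \
          --spdk_json_conf <(gen_nvmf_target_json 0 1) \
          <(gen_fio_conf)

Each attached controller's null-bdev namespace shows up to fio as its own target (the filename0/filename1 jobs below), which is why this run starts two threads.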
00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:18.031 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:18.031 "params": { 00:31:18.031 "name": "Nvme0", 00:31:18.031 "trtype": "tcp", 00:31:18.031 "traddr": "10.0.0.3", 00:31:18.031 "adrfam": "ipv4", 00:31:18.031 "trsvcid": "4420", 00:31:18.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:18.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:18.031 "hdgst": false, 00:31:18.031 "ddgst": false 00:31:18.031 }, 00:31:18.031 "method": "bdev_nvme_attach_controller" 00:31:18.031 },{ 00:31:18.031 "params": { 00:31:18.031 "name": "Nvme1", 00:31:18.031 "trtype": "tcp", 00:31:18.032 "traddr": "10.0.0.3", 00:31:18.032 "adrfam": "ipv4", 00:31:18.032 "trsvcid": "4420", 00:31:18.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:18.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:18.032 "hdgst": false, 00:31:18.032 "ddgst": false 00:31:18.032 }, 00:31:18.032 "method": "bdev_nvme_attach_controller" 00:31:18.032 }' 00:31:18.032 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:18.032 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:18.032 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.032 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:18.032 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:18.032 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:18.032 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:18.032 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:18.032 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:18.032 13:25:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.032 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:18.032 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:18.032 fio-3.35 00:31:18.032 Starting 2 threads 00:31:28.058 00:31:28.058 filename0: (groupid=0, jobs=1): err= 0: pid=109125: Fri Dec 6 13:25:16 2024 00:31:28.058 read: IOPS=160, BW=642KiB/s (658kB/s)(6432KiB/10014msec) 00:31:28.058 slat (nsec): min=6819, max=45403, avg=9471.85, stdev=4002.10 00:31:28.058 clat (usec): min=411, max=41957, avg=24880.32, stdev=19827.61 00:31:28.058 lat (usec): min=418, max=41968, avg=24889.80, stdev=19827.58 00:31:28.058 clat percentiles (usec): 00:31:28.058 | 1.00th=[ 433], 5.00th=[ 453], 10.00th=[ 465], 20.00th=[ 486], 00:31:28.058 | 30.00th=[ 510], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:31:28.058 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:28.058 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:31:28.058 | 99.99th=[42206] 00:31:28.058 bw ( KiB/s): min= 416, max= 1024, per=52.08%, avg=641.60, stdev=145.53, samples=20 00:31:28.058 iops : 
min= 104, max= 256, avg=160.40, stdev=36.38, samples=20 00:31:28.058 lat (usec) : 500=25.87%, 750=13.43%, 1000=0.25% 00:31:28.058 lat (msec) : 4=0.25%, 50=60.20% 00:31:28.058 cpu : usr=95.44%, sys=4.01%, ctx=109, majf=0, minf=0 00:31:28.058 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.058 issued rwts: total=1608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.058 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:28.058 filename1: (groupid=0, jobs=1): err= 0: pid=109126: Fri Dec 6 13:25:16 2024 00:31:28.058 read: IOPS=147, BW=590KiB/s (604kB/s)(5920KiB/10035msec) 00:31:28.058 slat (nsec): min=4343, max=30626, avg=9034.28, stdev=3260.00 00:31:28.058 clat (usec): min=407, max=42472, avg=27091.62, stdev=19231.14 00:31:28.058 lat (usec): min=414, max=42482, avg=27100.65, stdev=19231.15 00:31:28.058 clat percentiles (usec): 00:31:28.058 | 1.00th=[ 429], 5.00th=[ 449], 10.00th=[ 465], 20.00th=[ 490], 00:31:28.058 | 30.00th=[ 523], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:31:28.058 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:28.058 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:31:28.058 | 99.99th=[42730] 00:31:28.058 bw ( KiB/s): min= 448, max= 768, per=47.93%, avg=590.30, stdev=78.00, samples=20 00:31:28.058 iops : min= 112, max= 192, avg=147.55, stdev=19.49, samples=20 00:31:28.058 lat (usec) : 500=23.58%, 750=10.47% 00:31:28.058 lat (msec) : 4=0.27%, 50=65.68% 00:31:28.058 cpu : usr=95.28%, sys=4.32%, ctx=10, majf=0, minf=0 00:31:28.058 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.058 issued rwts: total=1480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.058 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:28.058 00:31:28.058 Run status group 0 (all jobs): 00:31:28.058 READ: bw=1231KiB/s (1260kB/s), 590KiB/s-642KiB/s (604kB/s-658kB/s), io=12.1MiB (12.6MB), run=10014-10035msec 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.058 13:25:16 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:28.058 ************************************ 00:31:28.058 END TEST fio_dif_1_multi_subsystems 00:31:28.058 ************************************ 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.058 00:31:28.058 real 0m11.312s 00:31:28.058 user 0m20.009s 00:31:28.058 sys 0m1.143s 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:28.058 13:25:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:28.059 13:25:16 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:28.059 13:25:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:28.059 13:25:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:28.059 13:25:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:28.059 ************************************ 00:31:28.059 START TEST fio_dif_rand_params 00:31:28.059 ************************************ 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.059 bdev_null0 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.059 [2024-12-06 13:25:16.553522] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.059 { 00:31:28.059 "params": { 00:31:28.059 "name": "Nvme$subsystem", 00:31:28.059 "trtype": "$TEST_TRANSPORT", 00:31:28.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.059 "adrfam": "ipv4", 00:31:28.059 "trsvcid": "$NVMF_PORT", 00:31:28.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.059 "hdgst": ${hdgst:-false}, 00:31:28.059 "ddgst": ${ddgst:-false} 00:31:28.059 }, 00:31:28.059 "method": "bdev_nvme_attach_controller" 00:31:28.059 } 00:31:28.059 EOF 00:31:28.059 )") 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 
-- # local file 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:28.059 "params": { 00:31:28.059 "name": "Nvme0", 00:31:28.059 "trtype": "tcp", 00:31:28.059 "traddr": "10.0.0.3", 00:31:28.059 "adrfam": "ipv4", 00:31:28.059 "trsvcid": "4420", 00:31:28.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:28.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:28.059 "hdgst": false, 00:31:28.059 "ddgst": false 00:31:28.059 }, 00:31:28.059 "method": "bdev_nvme_attach_controller" 00:31:28.059 }' 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:28.059 13:25:16 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.319 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:28.319 ... 00:31:28.319 fio-3.35 00:31:28.319 Starting 3 threads 00:31:34.884 00:31:34.884 filename0: (groupid=0, jobs=1): err= 0: pid=109278: Fri Dec 6 13:25:22 2024 00:31:34.884 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(149MiB/5003msec) 00:31:34.884 slat (nsec): min=4283, max=52485, avg=11764.98, stdev=4510.54 00:31:34.884 clat (usec): min=5546, max=54733, avg=12614.58, stdev=5341.61 00:31:34.884 lat (usec): min=5557, max=54745, avg=12626.34, stdev=5341.69 00:31:34.884 clat percentiles (usec): 00:31:34.884 | 1.00th=[ 6521], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[11076], 00:31:34.884 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:31:34.884 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13960], 95.00th=[14484], 00:31:34.884 | 99.00th=[52167], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:31:34.884 | 99.99th=[54789] 00:31:34.884 bw ( KiB/s): min=27136, max=34560, per=33.22%, avg=30208.00, stdev=2531.04, samples=9 00:31:34.884 iops : min= 212, max= 270, avg=236.00, stdev=19.77, samples=9 00:31:34.884 lat (msec) : 10=13.55%, 20=84.93%, 50=0.08%, 100=1.43% 00:31:34.884 cpu : usr=92.74%, sys=5.86%, ctx=10, majf=0, minf=0 00:31:34.884 IO depths : 1=7.3%, 2=92.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.884 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.884 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:34.884 filename0: (groupid=0, jobs=1): err= 0: pid=109279: Fri Dec 6 13:25:22 2024 00:31:34.885 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(136MiB/5004msec) 00:31:34.885 slat (nsec): min=6642, max=43837, avg=10276.29, stdev=4179.83 00:31:34.885 clat (usec): min=4224, max=17519, avg=13759.42, stdev=2851.55 00:31:34.885 lat (usec): min=4231, max=17536, avg=13769.70, stdev=2851.41 00:31:34.885 clat percentiles (usec): 00:31:34.885 | 1.00th=[ 4293], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[11469], 00:31:34.885 | 30.00th=[13960], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:31:34.885 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16057], 95.00th=[16581], 00:31:34.885 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17433], 99.95th=[17433], 00:31:34.885 | 99.99th=[17433] 00:31:34.885 bw ( KiB/s): min=25344, max=32256, per=30.69%, avg=27904.00, stdev=2688.00, samples=9 00:31:34.885 iops : min= 198, max= 252, avg=218.00, stdev=21.00, samples=9 00:31:34.885 lat (msec) : 10=14.05%, 20=85.95% 00:31:34.885 cpu : usr=92.78%, sys=5.94%, ctx=13, majf=0, minf=0 00:31:34.885 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.885 issued rwts: total=1089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.885 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:34.885 filename0: (groupid=0, jobs=1): err= 0: pid=109280: Fri Dec 6 13:25:22 2024 00:31:34.885 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(160MiB/5004msec) 00:31:34.885 slat (nsec): min=6988, max=47510, avg=12224.07, stdev=3868.45 00:31:34.885 clat 
(usec): min=5709, max=53397, avg=11727.93, stdev=6490.76 00:31:34.885 lat (usec): min=5720, max=53408, avg=11740.16, stdev=6490.65 00:31:34.885 clat percentiles (usec): 00:31:34.885 | 1.00th=[ 7308], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[ 9896], 00:31:34.885 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:31:34.885 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12125], 95.00th=[12649], 00:31:34.885 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:31:34.885 | 99.99th=[53216] 00:31:34.885 bw ( KiB/s): min=26368, max=36864, per=36.03%, avg=32768.00, stdev=3895.07, samples=9 00:31:34.885 iops : min= 206, max= 288, avg=256.00, stdev=30.43, samples=9 00:31:34.885 lat (msec) : 10=21.13%, 20=76.29%, 50=0.55%, 100=2.03% 00:31:34.885 cpu : usr=91.80%, sys=6.46%, ctx=53, majf=0, minf=0 00:31:34.885 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.885 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.885 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:34.885 00:31:34.885 Run status group 0 (all jobs): 00:31:34.885 READ: bw=88.8MiB/s (93.1MB/s), 27.2MiB/s-31.9MiB/s (28.5MB/s-33.5MB/s), io=444MiB (466MB), run=5003-5004msec 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:34.885 13:25:22 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 bdev_null0 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 [2024-12-06 13:25:22.647019] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 bdev_null1 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:34.885 13:25:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 bdev_null2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:34.885 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:31:34.886 { 00:31:34.886 "params": { 00:31:34.886 "name": "Nvme$subsystem", 00:31:34.886 "trtype": "$TEST_TRANSPORT", 00:31:34.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.886 "adrfam": "ipv4", 00:31:34.886 "trsvcid": "$NVMF_PORT", 00:31:34.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.886 "hdgst": ${hdgst:-false}, 00:31:34.886 "ddgst": ${ddgst:-false} 00:31:34.886 }, 00:31:34.886 "method": "bdev_nvme_attach_controller" 00:31:34.886 } 00:31:34.886 EOF 00:31:34.886 )") 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:34.886 { 00:31:34.886 "params": { 00:31:34.886 "name": "Nvme$subsystem", 00:31:34.886 "trtype": "$TEST_TRANSPORT", 00:31:34.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.886 "adrfam": "ipv4", 00:31:34.886 "trsvcid": "$NVMF_PORT", 00:31:34.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.886 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:31:34.886 "hdgst": ${hdgst:-false}, 00:31:34.886 "ddgst": ${ddgst:-false} 00:31:34.886 }, 00:31:34.886 "method": "bdev_nvme_attach_controller" 00:31:34.886 } 00:31:34.886 EOF 00:31:34.886 )") 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:34.886 { 00:31:34.886 "params": { 00:31:34.886 "name": "Nvme$subsystem", 00:31:34.886 "trtype": "$TEST_TRANSPORT", 00:31:34.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.886 "adrfam": "ipv4", 00:31:34.886 "trsvcid": "$NVMF_PORT", 00:31:34.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.886 "hdgst": ${hdgst:-false}, 00:31:34.886 "ddgst": ${ddgst:-false} 00:31:34.886 }, 00:31:34.886 "method": "bdev_nvme_attach_controller" 00:31:34.886 } 00:31:34.886 EOF 00:31:34.886 )") 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:34.886 "params": { 00:31:34.886 "name": "Nvme0", 00:31:34.886 "trtype": "tcp", 00:31:34.886 "traddr": "10.0.0.3", 00:31:34.886 "adrfam": "ipv4", 00:31:34.886 "trsvcid": "4420", 00:31:34.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:34.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:34.886 "hdgst": false, 00:31:34.886 "ddgst": false 00:31:34.886 }, 00:31:34.886 "method": "bdev_nvme_attach_controller" 00:31:34.886 },{ 00:31:34.886 "params": { 00:31:34.886 "name": "Nvme1", 00:31:34.886 "trtype": "tcp", 00:31:34.886 "traddr": "10.0.0.3", 00:31:34.886 "adrfam": "ipv4", 00:31:34.886 "trsvcid": "4420", 00:31:34.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:34.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:34.886 "hdgst": false, 00:31:34.886 "ddgst": false 00:31:34.886 }, 00:31:34.886 "method": "bdev_nvme_attach_controller" 00:31:34.886 },{ 00:31:34.886 "params": { 00:31:34.886 "name": "Nvme2", 00:31:34.886 "trtype": "tcp", 00:31:34.886 "traddr": "10.0.0.3", 00:31:34.886 "adrfam": "ipv4", 00:31:34.886 "trsvcid": "4420", 00:31:34.886 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:34.886 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:34.886 "hdgst": false, 00:31:34.886 "ddgst": false 00:31:34.886 }, 00:31:34.886 "method": "bdev_nvme_attach_controller" 00:31:34.886 }' 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:34.886 13:25:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.886 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:34.886 ... 00:31:34.886 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:34.886 ... 00:31:34.886 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:34.886 ... 00:31:34.886 fio-3.35 00:31:34.886 Starting 24 threads 00:31:47.200 00:31:47.200 filename0: (groupid=0, jobs=1): err= 0: pid=109375: Fri Dec 6 13:25:33 2024 00:31:47.200 read: IOPS=234, BW=938KiB/s (960kB/s)(9408KiB/10031msec) 00:31:47.200 slat (usec): min=6, max=8033, avg=24.19, stdev=289.18 00:31:47.200 clat (msec): min=9, max=147, avg=67.96, stdev=23.34 00:31:47.200 lat (msec): min=9, max=147, avg=67.98, stdev=23.34 00:31:47.200 clat percentiles (msec): 00:31:47.200 | 1.00th=[ 16], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:31:47.200 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:31:47.200 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 108], 00:31:47.200 | 99.00th=[ 128], 99.50th=[ 128], 99.90th=[ 148], 99.95th=[ 148], 00:31:47.200 | 99.99th=[ 148] 00:31:47.200 bw ( KiB/s): min= 526, max= 1536, per=4.59%, avg=938.50, stdev=204.45, samples=20 00:31:47.200 iops : min= 131, max= 384, avg=234.55, stdev=51.22, samples=20 00:31:47.200 lat (msec) : 10=0.09%, 20=2.21%, 50=24.62%, 100=63.90%, 250=9.18% 00:31:47.200 cpu : usr=37.89%, sys=0.85%, ctx=1086, majf=0, minf=9 00:31:47.200 IO depths : 1=0.7%, 2=1.9%, 4=8.7%, 8=75.7%, 16=13.0%, 32=0.0%, >=64=0.0% 00:31:47.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.200 complete : 0=0.0%, 4=89.9%, 8=5.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.200 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.200 filename0: (groupid=0, jobs=1): err= 0: pid=109376: Fri Dec 6 13:25:33 2024 00:31:47.200 read: IOPS=207, BW=828KiB/s (848kB/s)(8304KiB/10027msec) 00:31:47.200 slat (usec): min=5, max=8034, avg=22.67, stdev=304.81 00:31:47.200 clat (msec): min=22, max=167, avg=77.02, stdev=23.69 00:31:47.200 lat (msec): min=22, max=167, avg=77.04, stdev=23.69 00:31:47.200 clat percentiles (msec): 00:31:47.200 | 1.00th=[ 24], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:31:47.200 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:31:47.200 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:31:47.200 | 99.00th=[ 138], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:31:47.200 | 99.99th=[ 167] 00:31:47.200 bw ( KiB/s): min= 630, max= 1248, per=4.05%, avg=827.80, stdev=150.82, samples=20 00:31:47.200 iops : min= 157, max= 312, avg=206.90, stdev=37.75, samples=20 00:31:47.200 lat (msec) : 50=16.09%, 100=70.76%, 250=13.15% 00:31:47.200 cpu : usr=32.58%, sys=0.58%, ctx=883, majf=0, minf=9 
00:31:47.200 IO depths : 1=1.8%, 2=3.7%, 4=12.4%, 8=70.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:31:47.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.200 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.200 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.200 filename0: (groupid=0, jobs=1): err= 0: pid=109377: Fri Dec 6 13:25:33 2024 00:31:47.201 read: IOPS=206, BW=825KiB/s (845kB/s)(8268KiB/10025msec) 00:31:47.201 slat (usec): min=6, max=4030, avg=16.31, stdev=125.04 00:31:47.201 clat (msec): min=27, max=164, avg=77.38, stdev=24.71 00:31:47.201 lat (msec): min=27, max=164, avg=77.39, stdev=24.71 00:31:47.201 clat percentiles (msec): 00:31:47.201 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:31:47.201 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:31:47.201 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 112], 95.00th=[ 121], 00:31:47.201 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 165], 99.95th=[ 165], 00:31:47.201 | 99.99th=[ 165] 00:31:47.201 bw ( KiB/s): min= 512, max= 1104, per=4.04%, avg=824.30, stdev=173.22, samples=20 00:31:47.201 iops : min= 128, max= 276, avg=206.05, stdev=43.34, samples=20 00:31:47.201 lat (msec) : 50=17.71%, 100=66.42%, 250=15.87% 00:31:47.201 cpu : usr=40.10%, sys=0.69%, ctx=1160, majf=0, minf=9 00:31:47.201 IO depths : 1=1.9%, 2=4.2%, 4=12.9%, 8=69.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:31:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.201 filename0: (groupid=0, jobs=1): err= 0: pid=109378: Fri Dec 6 13:25:33 2024 00:31:47.201 read: IOPS=237, BW=951KiB/s (974kB/s)(9544KiB/10031msec) 00:31:47.201 slat (usec): min=5, max=9030, avg=17.51, stdev=202.30 00:31:47.201 clat (msec): min=26, max=136, avg=67.11, stdev=18.09 00:31:47.201 lat (msec): min=26, max=136, avg=67.12, stdev=18.09 00:31:47.201 clat percentiles (msec): 00:31:47.201 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 51], 00:31:47.201 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 67], 60.00th=[ 72], 00:31:47.201 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 99], 00:31:47.201 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 138], 00:31:47.201 | 99.99th=[ 138] 00:31:47.201 bw ( KiB/s): min= 688, max= 1152, per=4.63%, avg=945.47, stdev=143.14, samples=19 00:31:47.201 iops : min= 172, max= 288, avg=236.37, stdev=35.79, samples=19 00:31:47.201 lat (msec) : 50=17.73%, 100=78.83%, 250=3.44% 00:31:47.201 cpu : usr=37.23%, sys=0.71%, ctx=1173, majf=0, minf=9 00:31:47.201 IO depths : 1=0.3%, 2=0.8%, 4=6.5%, 8=79.0%, 16=13.5%, 32=0.0%, >=64=0.0% 00:31:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 complete : 0=0.0%, 4=89.2%, 8=6.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.201 filename0: (groupid=0, jobs=1): err= 0: pid=109379: Fri Dec 6 13:25:33 2024 00:31:47.201 read: IOPS=218, BW=873KiB/s (894kB/s)(8740KiB/10015msec) 00:31:47.201 slat (usec): min=4, max=8029, avg=26.59, stdev=310.28 00:31:47.201 clat (msec): min=11, max=192, avg=73.17, stdev=23.76 
00:31:47.201 lat (msec): min=11, max=192, avg=73.20, stdev=23.75 00:31:47.201 clat percentiles (msec): 00:31:47.201 | 1.00th=[ 19], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 53], 00:31:47.201 | 30.00th=[ 60], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:31:47.201 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 116], 00:31:47.201 | 99.00th=[ 126], 99.50th=[ 138], 99.90th=[ 192], 99.95th=[ 192], 00:31:47.201 | 99.99th=[ 192] 00:31:47.201 bw ( KiB/s): min= 638, max= 1282, per=4.25%, avg=867.50, stdev=178.79, samples=20 00:31:47.201 iops : min= 159, max= 320, avg=216.80, stdev=44.68, samples=20 00:31:47.201 lat (msec) : 20=1.14%, 50=17.25%, 100=66.91%, 250=14.69% 00:31:47.201 cpu : usr=42.89%, sys=0.94%, ctx=1355, majf=0, minf=9 00:31:47.201 IO depths : 1=1.7%, 2=4.1%, 4=12.5%, 8=70.3%, 16=11.4%, 32=0.0%, >=64=0.0% 00:31:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 complete : 0=0.0%, 4=90.8%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 issued rwts: total=2185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.201 filename0: (groupid=0, jobs=1): err= 0: pid=109380: Fri Dec 6 13:25:33 2024 00:31:47.201 read: IOPS=223, BW=894KiB/s (915kB/s)(8956KiB/10022msec) 00:31:47.201 slat (usec): min=4, max=7021, avg=16.28, stdev=170.78 00:31:47.201 clat (msec): min=23, max=146, avg=71.43, stdev=24.30 00:31:47.201 lat (msec): min=23, max=146, avg=71.45, stdev=24.30 00:31:47.201 clat percentiles (msec): 00:31:47.201 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 48], 00:31:47.201 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 74], 00:31:47.201 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 105], 95.00th=[ 120], 00:31:47.201 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:31:47.201 | 99.99th=[ 146] 00:31:47.201 bw ( KiB/s): min= 512, max= 1176, per=4.36%, avg=891.21, stdev=190.08, samples=19 00:31:47.201 iops : min= 128, max= 294, avg=222.79, stdev=47.50, samples=19 00:31:47.201 lat (msec) : 50=23.36%, 100=64.85%, 250=11.79% 00:31:47.201 cpu : usr=43.87%, sys=0.96%, ctx=1215, majf=0, minf=9 00:31:47.201 IO depths : 1=1.4%, 2=3.2%, 4=10.5%, 8=72.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:31:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 complete : 0=0.0%, 4=90.3%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.201 filename0: (groupid=0, jobs=1): err= 0: pid=109381: Fri Dec 6 13:25:33 2024 00:31:47.201 read: IOPS=203, BW=814KiB/s (833kB/s)(8160KiB/10027msec) 00:31:47.201 slat (usec): min=4, max=8073, avg=32.71, stdev=396.40 00:31:47.201 clat (msec): min=21, max=143, avg=78.41, stdev=22.59 00:31:47.201 lat (msec): min=21, max=143, avg=78.45, stdev=22.61 00:31:47.201 clat percentiles (msec): 00:31:47.201 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 61], 00:31:47.201 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 82], 00:31:47.201 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 120], 00:31:47.201 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:31:47.201 | 99.99th=[ 144] 00:31:47.201 bw ( KiB/s): min= 608, max= 1128, per=3.97%, avg=811.32, stdev=144.37, samples=19 00:31:47.201 iops : min= 152, max= 282, avg=202.79, stdev=36.03, samples=19 00:31:47.201 lat (msec) : 50=12.65%, 100=70.83%, 250=16.52% 00:31:47.201 
cpu : usr=36.04%, sys=0.82%, ctx=949, majf=0, minf=9 00:31:47.201 IO depths : 1=2.4%, 2=5.1%, 4=14.4%, 8=67.4%, 16=10.8%, 32=0.0%, >=64=0.0% 00:31:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 complete : 0=0.0%, 4=91.2%, 8=3.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.201 filename0: (groupid=0, jobs=1): err= 0: pid=109382: Fri Dec 6 13:25:33 2024 00:31:47.201 read: IOPS=204, BW=816KiB/s (836kB/s)(8172KiB/10011msec) 00:31:47.201 slat (usec): min=5, max=8023, avg=23.88, stdev=306.84 00:31:47.201 clat (msec): min=15, max=167, avg=78.25, stdev=22.48 00:31:47.201 lat (msec): min=15, max=167, avg=78.27, stdev=22.48 00:31:47.201 clat percentiles (msec): 00:31:47.201 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:31:47.201 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:31:47.201 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 121], 00:31:47.201 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 155], 00:31:47.201 | 99.99th=[ 167] 00:31:47.201 bw ( KiB/s): min= 560, max= 1024, per=4.01%, avg=818.00, stdev=127.33, samples=19 00:31:47.201 iops : min= 140, max= 256, avg=204.47, stdev=31.86, samples=19 00:31:47.201 lat (msec) : 20=0.49%, 50=11.60%, 100=72.79%, 250=15.12% 00:31:47.201 cpu : usr=35.43%, sys=0.89%, ctx=963, majf=0, minf=9 00:31:47.201 IO depths : 1=2.0%, 2=4.3%, 4=12.6%, 8=70.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:31:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 complete : 0=0.0%, 4=90.8%, 8=4.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.201 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.201 filename1: (groupid=0, jobs=1): err= 0: pid=109383: Fri Dec 6 13:25:33 2024 00:31:47.201 read: IOPS=234, BW=939KiB/s (962kB/s)(9424KiB/10036msec) 00:31:47.201 slat (nsec): min=4955, max=58095, avg=11024.24, stdev=5204.31 00:31:47.201 clat (msec): min=21, max=144, avg=68.00, stdev=19.35 00:31:47.201 lat (msec): min=21, max=144, avg=68.01, stdev=19.35 00:31:47.201 clat percentiles (msec): 00:31:47.201 | 1.00th=[ 25], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 50], 00:31:47.201 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:31:47.201 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 97], 00:31:47.201 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 144], 00:31:47.201 | 99.99th=[ 144] 00:31:47.202 bw ( KiB/s): min= 744, max= 1248, per=4.59%, avg=938.60, stdev=133.03, samples=20 00:31:47.202 iops : min= 186, max= 312, avg=234.60, stdev=33.27, samples=20 00:31:47.202 lat (msec) : 50=21.01%, 100=74.45%, 250=4.54% 00:31:47.202 cpu : usr=44.69%, sys=0.94%, ctx=1048, majf=0, minf=10 00:31:47.202 IO depths : 1=1.2%, 2=2.5%, 4=10.1%, 8=74.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:31:47.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 issued rwts: total=2356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.202 filename1: (groupid=0, jobs=1): err= 0: pid=109384: Fri Dec 6 13:25:33 2024 00:31:47.202 read: IOPS=183, BW=734KiB/s (751kB/s)(7340KiB/10004msec) 00:31:47.202 slat (usec): min=5, max=6640, avg=16.17, 
stdev=154.88 00:31:47.202 clat (msec): min=2, max=159, avg=87.14, stdev=26.55 00:31:47.202 lat (msec): min=2, max=159, avg=87.16, stdev=26.55 00:31:47.202 clat percentiles (msec): 00:31:47.202 | 1.00th=[ 5], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 70], 00:31:47.202 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 91], 00:31:47.202 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 138], 00:31:47.202 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 161], 99.95th=[ 161], 00:31:47.202 | 99.99th=[ 161] 00:31:47.202 bw ( KiB/s): min= 512, max= 944, per=3.49%, avg=713.89, stdev=106.87, samples=19 00:31:47.202 iops : min= 128, max= 236, avg=178.47, stdev=26.72, samples=19 00:31:47.202 lat (msec) : 4=0.76%, 10=0.71%, 50=3.60%, 100=66.59%, 250=28.34% 00:31:47.202 cpu : usr=38.76%, sys=0.76%, ctx=1345, majf=0, minf=9 00:31:47.202 IO depths : 1=1.0%, 2=2.6%, 4=9.9%, 8=72.5%, 16=14.0%, 32=0.0%, >=64=0.0% 00:31:47.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 complete : 0=0.0%, 4=90.4%, 8=6.4%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.202 filename1: (groupid=0, jobs=1): err= 0: pid=109385: Fri Dec 6 13:25:33 2024 00:31:47.202 read: IOPS=202, BW=812KiB/s (831kB/s)(8140KiB/10025msec) 00:31:47.202 slat (usec): min=4, max=8041, avg=19.14, stdev=251.44 00:31:47.202 clat (msec): min=31, max=167, avg=78.68, stdev=25.58 00:31:47.202 lat (msec): min=31, max=167, avg=78.70, stdev=25.58 00:31:47.202 clat percentiles (msec): 00:31:47.202 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:31:47.202 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:31:47.202 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 130], 00:31:47.202 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 169], 99.95th=[ 169], 00:31:47.202 | 99.99th=[ 169] 00:31:47.202 bw ( KiB/s): min= 439, max= 1104, per=3.95%, avg=806.47, stdev=184.53, samples=19 00:31:47.202 iops : min= 109, max= 276, avg=201.53, stdev=46.15, samples=19 00:31:47.202 lat (msec) : 50=10.81%, 100=74.45%, 250=14.74% 00:31:47.202 cpu : usr=38.10%, sys=0.79%, ctx=1405, majf=0, minf=9 00:31:47.202 IO depths : 1=1.6%, 2=3.9%, 4=12.3%, 8=70.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:31:47.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 complete : 0=0.0%, 4=90.8%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.202 filename1: (groupid=0, jobs=1): err= 0: pid=109386: Fri Dec 6 13:25:33 2024 00:31:47.202 read: IOPS=267, BW=1072KiB/s (1098kB/s)(10.5MiB/10045msec) 00:31:47.202 slat (usec): min=3, max=4019, avg=13.78, stdev=106.61 00:31:47.202 clat (usec): min=1566, max=166043, avg=59539.49, stdev=26405.09 00:31:47.202 lat (usec): min=1574, max=166057, avg=59553.26, stdev=26405.72 00:31:47.202 clat percentiles (usec): 00:31:47.202 | 1.00th=[ 1663], 5.00th=[ 5538], 10.00th=[ 33424], 20.00th=[ 45351], 00:31:47.202 | 30.00th=[ 48497], 40.00th=[ 52691], 50.00th=[ 55837], 60.00th=[ 62653], 00:31:47.202 | 70.00th=[ 68682], 80.00th=[ 78119], 90.00th=[ 89654], 95.00th=[105382], 00:31:47.202 | 99.00th=[137364], 99.50th=[166724], 99.90th=[166724], 99.95th=[166724], 00:31:47.202 | 99.99th=[166724] 00:31:47.202 bw ( KiB/s): min= 510, max= 2768, per=5.25%, avg=1072.15, stdev=436.45, samples=20 
00:31:47.202 iops : min= 127, max= 692, avg=267.95, stdev=109.16, samples=20 00:31:47.202 lat (msec) : 2=2.38%, 4=1.19%, 10=2.38%, 20=2.38%, 50=24.96% 00:31:47.202 lat (msec) : 100=60.25%, 250=6.46% 00:31:47.202 cpu : usr=46.03%, sys=1.04%, ctx=1316, majf=0, minf=0 00:31:47.202 IO depths : 1=1.2%, 2=2.6%, 4=9.9%, 8=74.4%, 16=11.9%, 32=0.0%, >=64=0.0% 00:31:47.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 complete : 0=0.0%, 4=89.9%, 8=5.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 issued rwts: total=2692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.202 filename1: (groupid=0, jobs=1): err= 0: pid=109387: Fri Dec 6 13:25:33 2024 00:31:47.202 read: IOPS=198, BW=794KiB/s (813kB/s)(7940KiB/10003msec) 00:31:47.202 slat (usec): min=5, max=119, avg=13.00, stdev= 8.03 00:31:47.202 clat (msec): min=4, max=158, avg=80.54, stdev=23.42 00:31:47.202 lat (msec): min=4, max=158, avg=80.55, stdev=23.42 00:31:47.202 clat percentiles (msec): 00:31:47.202 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 63], 00:31:47.202 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 82], 00:31:47.202 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 116], 95.00th=[ 123], 00:31:47.202 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:31:47.202 | 99.99th=[ 159] 00:31:47.202 bw ( KiB/s): min= 512, max= 1104, per=3.82%, avg=781.68, stdev=124.12, samples=19 00:31:47.202 iops : min= 128, max= 276, avg=195.37, stdev=31.00, samples=19 00:31:47.202 lat (msec) : 10=0.81%, 50=7.36%, 100=73.10%, 250=18.74% 00:31:47.202 cpu : usr=40.59%, sys=0.79%, ctx=1225, majf=0, minf=10 00:31:47.202 IO depths : 1=2.1%, 2=4.5%, 4=12.3%, 8=69.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:31:47.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 complete : 0=0.0%, 4=91.0%, 8=4.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 issued rwts: total=1985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.202 filename1: (groupid=0, jobs=1): err= 0: pid=109388: Fri Dec 6 13:25:33 2024 00:31:47.202 read: IOPS=183, BW=732KiB/s (750kB/s)(7328KiB/10005msec) 00:31:47.202 slat (usec): min=3, max=8059, avg=24.11, stdev=311.75 00:31:47.202 clat (msec): min=36, max=166, avg=87.24, stdev=24.99 00:31:47.202 lat (msec): min=36, max=166, avg=87.27, stdev=24.98 00:31:47.202 clat percentiles (msec): 00:31:47.202 | 1.00th=[ 39], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 71], 00:31:47.202 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 93], 00:31:47.202 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 136], 00:31:47.202 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:31:47.202 | 99.99th=[ 167] 00:31:47.202 bw ( KiB/s): min= 464, max= 976, per=3.55%, avg=724.11, stdev=123.10, samples=19 00:31:47.202 iops : min= 116, max= 244, avg=181.00, stdev=30.77, samples=19 00:31:47.202 lat (msec) : 50=5.68%, 100=71.51%, 250=22.82% 00:31:47.202 cpu : usr=32.62%, sys=0.62%, ctx=890, majf=0, minf=9 00:31:47.202 IO depths : 1=2.7%, 2=5.6%, 4=15.3%, 8=66.0%, 16=10.3%, 32=0.0%, >=64=0.0% 00:31:47.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 complete : 0=0.0%, 4=91.2%, 8=3.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 issued rwts: total=1832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.202 
filename1: (groupid=0, jobs=1): err= 0: pid=109389: Fri Dec 6 13:25:33 2024 00:31:47.202 read: IOPS=229, BW=920KiB/s (942kB/s)(9240KiB/10044msec) 00:31:47.202 slat (usec): min=7, max=8021, avg=16.42, stdev=196.55 00:31:47.202 clat (msec): min=4, max=167, avg=69.33, stdev=24.27 00:31:47.202 lat (msec): min=4, max=167, avg=69.34, stdev=24.27 00:31:47.202 clat percentiles (msec): 00:31:47.202 | 1.00th=[ 8], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 48], 00:31:47.202 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:31:47.202 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 111], 00:31:47.202 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 157], 99.95th=[ 157], 00:31:47.202 | 99.99th=[ 169] 00:31:47.202 bw ( KiB/s): min= 640, max= 1630, per=4.50%, avg=919.05, stdev=225.86, samples=20 00:31:47.202 iops : min= 160, max= 407, avg=229.70, stdev=56.39, samples=20 00:31:47.202 lat (msec) : 10=1.39%, 20=2.08%, 50=20.26%, 100=67.10%, 250=9.18% 00:31:47.202 cpu : usr=32.52%, sys=0.79%, ctx=886, majf=0, minf=9 00:31:47.202 IO depths : 1=0.9%, 2=2.3%, 4=10.5%, 8=74.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:31:47.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 complete : 0=0.0%, 4=90.0%, 8=5.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.202 issued rwts: total=2310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.202 filename1: (groupid=0, jobs=1): err= 0: pid=109390: Fri Dec 6 13:25:33 2024 00:31:47.202 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10031msec) 00:31:47.202 slat (usec): min=5, max=8030, avg=16.49, stdev=183.87 00:31:47.203 clat (msec): min=37, max=188, avg=84.09, stdev=22.87 00:31:47.203 lat (msec): min=37, max=188, avg=84.10, stdev=22.87 00:31:47.203 clat percentiles (msec): 00:31:47.203 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 70], 00:31:47.203 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 84], 00:31:47.203 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 125], 00:31:47.203 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 188], 99.95th=[ 188], 00:31:47.203 | 99.99th=[ 188] 00:31:47.203 bw ( KiB/s): min= 512, max= 944, per=3.65%, avg=746.47, stdev=111.19, samples=19 00:31:47.203 iops : min= 128, max= 236, avg=186.58, stdev=27.78, samples=19 00:31:47.203 lat (msec) : 50=6.46%, 100=71.95%, 250=21.59% 00:31:47.203 cpu : usr=32.58%, sys=0.58%, ctx=903, majf=0, minf=9 00:31:47.203 IO depths : 1=2.9%, 2=6.5%, 4=17.1%, 8=63.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:47.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.203 filename2: (groupid=0, jobs=1): err= 0: pid=109391: Fri Dec 6 13:25:33 2024 00:31:47.203 read: IOPS=194, BW=777KiB/s (795kB/s)(7776KiB/10013msec) 00:31:47.203 slat (usec): min=5, max=6021, avg=19.97, stdev=187.48 00:31:47.203 clat (msec): min=34, max=164, avg=82.29, stdev=21.44 00:31:47.203 lat (msec): min=34, max=164, avg=82.31, stdev=21.44 00:31:47.203 clat percentiles (msec): 00:31:47.203 | 1.00th=[ 36], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 68], 00:31:47.203 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 84], 00:31:47.203 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 116], 95.00th=[ 121], 00:31:47.203 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 165], 99.95th=[ 
165], 00:31:47.203 | 99.99th=[ 165] 00:31:47.203 bw ( KiB/s): min= 512, max= 944, per=3.81%, avg=777.79, stdev=121.16, samples=19 00:31:47.203 iops : min= 128, max= 236, avg=194.37, stdev=30.28, samples=19 00:31:47.203 lat (msec) : 50=4.68%, 100=76.18%, 250=19.14% 00:31:47.203 cpu : usr=43.22%, sys=0.97%, ctx=1484, majf=0, minf=9 00:31:47.203 IO depths : 1=3.0%, 2=6.9%, 4=18.0%, 8=61.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:31:47.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 complete : 0=0.0%, 4=92.3%, 8=2.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.203 filename2: (groupid=0, jobs=1): err= 0: pid=109392: Fri Dec 6 13:25:33 2024 00:31:47.203 read: IOPS=189, BW=757KiB/s (775kB/s)(7584KiB/10015msec) 00:31:47.203 slat (usec): min=5, max=8033, avg=17.60, stdev=185.83 00:31:47.203 clat (msec): min=26, max=179, avg=84.37, stdev=23.21 00:31:47.203 lat (msec): min=26, max=179, avg=84.38, stdev=23.21 00:31:47.203 clat percentiles (msec): 00:31:47.203 | 1.00th=[ 36], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 69], 00:31:47.203 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 86], 00:31:47.203 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 118], 95.00th=[ 121], 00:31:47.203 | 99.00th=[ 157], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 180], 00:31:47.203 | 99.99th=[ 180] 00:31:47.203 bw ( KiB/s): min= 512, max= 880, per=3.67%, avg=750.95, stdev=103.86, samples=19 00:31:47.203 iops : min= 128, max= 220, avg=187.68, stdev=26.02, samples=19 00:31:47.203 lat (msec) : 50=4.96%, 100=74.26%, 250=20.78% 00:31:47.203 cpu : usr=35.89%, sys=0.73%, ctx=1066, majf=0, minf=9 00:31:47.203 IO depths : 1=2.7%, 2=6.3%, 4=16.7%, 8=64.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:31:47.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.203 filename2: (groupid=0, jobs=1): err= 0: pid=109393: Fri Dec 6 13:25:33 2024 00:31:47.203 read: IOPS=238, BW=952KiB/s (975kB/s)(9548KiB/10029msec) 00:31:47.203 slat (usec): min=4, max=4017, avg=12.28, stdev=82.15 00:31:47.203 clat (msec): min=28, max=156, avg=67.11, stdev=22.89 00:31:47.203 lat (msec): min=28, max=156, avg=67.13, stdev=22.89 00:31:47.203 clat percentiles (msec): 00:31:47.203 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 48], 00:31:47.203 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 72], 00:31:47.203 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 109], 00:31:47.203 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:31:47.203 | 99.99th=[ 157] 00:31:47.203 bw ( KiB/s): min= 688, max= 1248, per=4.65%, avg=950.80, stdev=178.59, samples=20 00:31:47.203 iops : min= 172, max= 312, avg=237.70, stdev=44.65, samples=20 00:31:47.203 lat (msec) : 50=31.21%, 100=59.57%, 250=9.22% 00:31:47.203 cpu : usr=33.12%, sys=0.79%, ctx=893, majf=0, minf=9 00:31:47.203 IO depths : 1=0.4%, 2=0.9%, 4=6.7%, 8=78.6%, 16=13.3%, 32=0.0%, >=64=0.0% 00:31:47.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 complete : 0=0.0%, 4=89.1%, 8=6.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 issued rwts: total=2387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.203 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:31:47.203 filename2: (groupid=0, jobs=1): err= 0: pid=109394: Fri Dec 6 13:25:33 2024 00:31:47.203 read: IOPS=194, BW=777KiB/s (795kB/s)(7784KiB/10022msec) 00:31:47.203 slat (usec): min=5, max=8019, avg=23.75, stdev=291.82 00:31:47.203 clat (msec): min=27, max=193, avg=82.16, stdev=23.38 00:31:47.203 lat (msec): min=27, max=193, avg=82.19, stdev=23.39 00:31:47.203 clat percentiles (msec): 00:31:47.203 | 1.00th=[ 34], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 68], 00:31:47.203 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 85], 00:31:47.203 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 116], 95.00th=[ 123], 00:31:47.203 | 99.00th=[ 150], 99.50th=[ 182], 99.90th=[ 194], 99.95th=[ 194], 00:31:47.203 | 99.99th=[ 194] 00:31:47.203 bw ( KiB/s): min= 510, max= 984, per=3.75%, avg=766.00, stdev=136.48, samples=19 00:31:47.203 iops : min= 127, max= 246, avg=191.42, stdev=34.15, samples=19 00:31:47.203 lat (msec) : 50=9.35%, 100=74.25%, 250=16.39% 00:31:47.203 cpu : usr=36.34%, sys=0.68%, ctx=1044, majf=0, minf=9 00:31:47.203 IO depths : 1=3.1%, 2=6.8%, 4=18.1%, 8=62.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:47.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 complete : 0=0.0%, 4=92.0%, 8=2.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.203 filename2: (groupid=0, jobs=1): err= 0: pid=109395: Fri Dec 6 13:25:33 2024 00:31:47.203 read: IOPS=184, BW=738KiB/s (756kB/s)(7404KiB/10032msec) 00:31:47.203 slat (usec): min=4, max=8023, avg=20.35, stdev=263.34 00:31:47.203 clat (msec): min=32, max=203, avg=86.52, stdev=24.74 00:31:47.203 lat (msec): min=32, max=203, avg=86.54, stdev=24.74 00:31:47.203 clat percentiles (msec): 00:31:47.203 | 1.00th=[ 35], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 70], 00:31:47.203 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 88], 00:31:47.203 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 132], 00:31:47.203 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 203], 99.95th=[ 203], 00:31:47.203 | 99.99th=[ 203] 00:31:47.203 bw ( KiB/s): min= 512, max= 896, per=3.59%, avg=733.05, stdev=113.80, samples=19 00:31:47.203 iops : min= 128, max= 224, avg=183.26, stdev=28.45, samples=19 00:31:47.203 lat (msec) : 50=5.62%, 100=71.26%, 250=23.12% 00:31:47.203 cpu : usr=32.55%, sys=0.68%, ctx=894, majf=0, minf=9 00:31:47.203 IO depths : 1=2.0%, 2=5.1%, 4=15.7%, 8=66.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:31:47.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 complete : 0=0.0%, 4=91.5%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 issued rwts: total=1851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.203 filename2: (groupid=0, jobs=1): err= 0: pid=109396: Fri Dec 6 13:25:33 2024 00:31:47.203 read: IOPS=250, BW=1002KiB/s (1027kB/s)(9.83MiB/10039msec) 00:31:47.203 slat (usec): min=5, max=8020, avg=23.90, stdev=252.51 00:31:47.203 clat (msec): min=4, max=136, avg=63.61, stdev=21.25 00:31:47.203 lat (msec): min=4, max=136, avg=63.64, stdev=21.26 00:31:47.203 clat percentiles (msec): 00:31:47.203 | 1.00th=[ 8], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 48], 00:31:47.203 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 69], 00:31:47.203 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 90], 95.00th=[ 104], 00:31:47.203 | 99.00th=[ 121], 99.50th=[ 
122], 99.90th=[ 138], 99.95th=[ 138], 00:31:47.203 | 99.99th=[ 138] 00:31:47.203 bw ( KiB/s): min= 688, max= 1664, per=4.89%, avg=999.75, stdev=209.79, samples=20 00:31:47.203 iops : min= 172, max= 416, avg=249.90, stdev=52.47, samples=20 00:31:47.203 lat (msec) : 10=1.91%, 20=1.83%, 50=24.48%, 100=66.65%, 250=5.13% 00:31:47.203 cpu : usr=43.37%, sys=1.02%, ctx=1291, majf=0, minf=9 00:31:47.203 IO depths : 1=1.6%, 2=3.5%, 4=10.7%, 8=72.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:31:47.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.203 issued rwts: total=2516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.203 filename2: (groupid=0, jobs=1): err= 0: pid=109397: Fri Dec 6 13:25:33 2024 00:31:47.203 read: IOPS=218, BW=875KiB/s (896kB/s)(8768KiB/10025msec) 00:31:47.203 slat (usec): min=5, max=8021, avg=19.45, stdev=241.89 00:31:47.203 clat (msec): min=16, max=150, avg=72.96, stdev=24.00 00:31:47.203 lat (msec): min=16, max=150, avg=72.98, stdev=24.01 00:31:47.203 clat percentiles (msec): 00:31:47.203 | 1.00th=[ 19], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 52], 00:31:47.203 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 75], 00:31:47.203 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 106], 95.00th=[ 121], 00:31:47.203 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:31:47.203 | 99.99th=[ 150] 00:31:47.203 bw ( KiB/s): min= 510, max= 1240, per=4.28%, avg=874.20, stdev=165.97, samples=20 00:31:47.204 iops : min= 127, max= 310, avg=218.50, stdev=41.55, samples=20 00:31:47.204 lat (msec) : 20=1.46%, 50=16.88%, 100=70.57%, 250=11.09% 00:31:47.204 cpu : usr=35.61%, sys=0.71%, ctx=1003, majf=0, minf=9 00:31:47.204 IO depths : 1=1.0%, 2=2.3%, 4=8.5%, 8=75.2%, 16=13.0%, 32=0.0%, >=64=0.0% 00:31:47.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.204 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.204 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.204 filename2: (groupid=0, jobs=1): err= 0: pid=109398: Fri Dec 6 13:25:33 2024 00:31:47.204 read: IOPS=220, BW=881KiB/s (902kB/s)(8828KiB/10024msec) 00:31:47.204 slat (usec): min=5, max=8018, avg=18.37, stdev=233.73 00:31:47.204 clat (msec): min=23, max=137, avg=72.57, stdev=22.72 00:31:47.204 lat (msec): min=23, max=137, avg=72.59, stdev=22.72 00:31:47.204 clat percentiles (msec): 00:31:47.204 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 52], 00:31:47.204 | 30.00th=[ 58], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:31:47.204 | 70.00th=[ 80], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 120], 00:31:47.204 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 138], 00:31:47.204 | 99.99th=[ 138] 00:31:47.204 bw ( KiB/s): min= 560, max= 1202, per=4.29%, avg=876.40, stdev=169.10, samples=20 00:31:47.204 iops : min= 140, max= 300, avg=219.05, stdev=42.27, samples=20 00:31:47.204 lat (msec) : 50=17.94%, 100=70.05%, 250=12.01% 00:31:47.204 cpu : usr=40.26%, sys=0.84%, ctx=1275, majf=0, minf=9 00:31:47.204 IO depths : 1=2.2%, 2=4.6%, 4=12.3%, 8=69.4%, 16=11.5%, 32=0.0%, >=64=0.0% 00:31:47.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.204 complete : 0=0.0%, 4=90.9%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.204 issued rwts: 
total=2207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:47.204 00:31:47.204 Run status group 0 (all jobs): 00:31:47.204 READ: bw=19.9MiB/s (20.9MB/s), 732KiB/s-1072KiB/s (750kB/s-1098kB/s), io=200MiB (210MB), run=10003-10045msec 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null2 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 bdev_null0 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 [2024-12-06 13:25:34.289505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 
-- # local sub_id=1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 bdev_null1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.204 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:47.205 { 00:31:47.205 "params": { 00:31:47.205 "name": "Nvme$subsystem", 00:31:47.205 "trtype": "$TEST_TRANSPORT", 00:31:47.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.205 "adrfam": "ipv4", 00:31:47.205 "trsvcid": "$NVMF_PORT", 00:31:47.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.205 "hdgst": ${hdgst:-false}, 00:31:47.205 "ddgst": ${ddgst:-false} 00:31:47.205 }, 00:31:47.205 "method": "bdev_nvme_attach_controller" 00:31:47.205 } 00:31:47.205 EOF 00:31:47.205 )") 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 
-- # gen_fio_conf 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:47.205 { 00:31:47.205 "params": { 00:31:47.205 "name": "Nvme$subsystem", 00:31:47.205 "trtype": "$TEST_TRANSPORT", 00:31:47.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.205 "adrfam": "ipv4", 00:31:47.205 "trsvcid": "$NVMF_PORT", 00:31:47.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.205 "hdgst": ${hdgst:-false}, 00:31:47.205 "ddgst": ${ddgst:-false} 00:31:47.205 }, 00:31:47.205 "method": "bdev_nvme_attach_controller" 00:31:47.205 } 00:31:47.205 EOF 00:31:47.205 )") 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
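[Note] The xtrace above shows the idiom the test uses to build the fio JSON configuration: one heredoc fragment per subsystem appended to a config array, later comma-joined and checked with jq. A condensed standalone sketch of that idiom follows; the values are hard-coded from this log, and the bracket wrapper in the last line is an assumption added only so jq sees a complete JSON document (the real helper splices the fragments into a full SPDK JSON config).

    # Sketch: collect one attach-controller fragment per subsystem,
    # comma-join them, and validate the result with jq.
    config=()
    for i in 0 1; do
        config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$i",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$i",
        "hostnqn": "nqn.2016-06.io.spdk:host$i",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    )")
    done
    # Join with commas; wrap in [] purely so jq can parse the join.
    (IFS=,; printf '[%s]\n' "${config[*]}") | jq .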
00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:47.205 "params": { 00:31:47.205 "name": "Nvme0", 00:31:47.205 "trtype": "tcp", 00:31:47.205 "traddr": "10.0.0.3", 00:31:47.205 "adrfam": "ipv4", 00:31:47.205 "trsvcid": "4420", 00:31:47.205 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.205 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:47.205 "hdgst": false, 00:31:47.205 "ddgst": false 00:31:47.205 }, 00:31:47.205 "method": "bdev_nvme_attach_controller" 00:31:47.205 },{ 00:31:47.205 "params": { 00:31:47.205 "name": "Nvme1", 00:31:47.205 "trtype": "tcp", 00:31:47.205 "traddr": "10.0.0.3", 00:31:47.205 "adrfam": "ipv4", 00:31:47.205 "trsvcid": "4420", 00:31:47.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:47.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:47.205 "hdgst": false, 00:31:47.205 "ddgst": false 00:31:47.205 }, 00:31:47.205 "method": "bdev_nvme_attach_controller" 00:31:47.205 }' 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:47.205 13:25:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.205 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:47.205 ... 00:31:47.205 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:47.205 ... 
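[Note] Before launching fio, the xtrace above probes the bdev plugin with ldd once per sanitizer runtime and assembles the LD_PRELOAD string (empty here apart from the plugin itself, since neither libasan nor libclang_rt.asan is linked in). A minimal sketch of that probe plus the resulting launch line; the paths are the ones this log uses, and /dev/fd/62 and /dev/fd/61 stand in for the process-substitution descriptors (JSON config and job file) the harness wires up, so they will not exist if the snippet is run in isolation.

    # Sketch of the sanitizer-preload probe traced above.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    preload=
    for sanitizer in libasan libclang_rt.asan; do
        # ldd's third column is the resolved library path, empty if absent.
        lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $lib ]] && preload+=" $lib"
    done
    # fio must preload the same ASan runtime the plugin was built with,
    # plus the plugin itself so the spdk_bdev ioengine can be resolved.
    LD_PRELOAD="$preload $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61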
00:31:47.205 fio-3.35 00:31:47.205 Starting 4 threads 00:31:52.476 00:31:52.476 filename0: (groupid=0, jobs=1): err= 0: pid=109529: Fri Dec 6 13:25:40 2024 00:31:52.476 read: IOPS=1948, BW=15.2MiB/s (16.0MB/s)(76.1MiB/5002msec) 00:31:52.476 slat (nsec): min=4093, max=48161, avg=11896.16, stdev=4434.76 00:31:52.476 clat (usec): min=3043, max=5243, avg=4053.10, stdev=82.08 00:31:52.476 lat (usec): min=3055, max=5256, avg=4065.00, stdev=81.46 00:31:52.476 clat percentiles (usec): 00:31:52.476 | 1.00th=[ 3916], 5.00th=[ 3949], 10.00th=[ 3982], 20.00th=[ 4015], 00:31:52.476 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4047], 00:31:52.476 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4146], 95.00th=[ 4178], 00:31:52.476 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 4883], 99.95th=[ 4948], 00:31:52.476 | 99.99th=[ 5276] 00:31:52.476 bw ( KiB/s): min=15232, max=15744, per=25.00%, avg=15587.56, stdev=178.49, samples=9 00:31:52.476 iops : min= 1904, max= 1968, avg=1948.44, stdev=22.31, samples=9 00:31:52.476 lat (msec) : 4=17.91%, 10=82.09% 00:31:52.476 cpu : usr=94.92%, sys=4.02%, ctx=6, majf=0, minf=0 00:31:52.476 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.476 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.476 issued rwts: total=9744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.476 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:52.476 filename0: (groupid=0, jobs=1): err= 0: pid=109530: Fri Dec 6 13:25:40 2024 00:31:52.476 read: IOPS=1946, BW=15.2MiB/s (15.9MB/s)(76.1MiB/5001msec) 00:31:52.476 slat (nsec): min=5098, max=43456, avg=13210.81, stdev=4487.61 00:31:52.476 clat (usec): min=2918, max=7149, avg=4041.63, stdev=136.44 00:31:52.476 lat (usec): min=2931, max=7181, avg=4054.84, stdev=136.76 00:31:52.476 clat percentiles (usec): 00:31:52.476 | 1.00th=[ 3916], 5.00th=[ 3949], 10.00th=[ 3949], 20.00th=[ 3982], 00:31:52.476 | 30.00th=[ 4015], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4047], 00:31:52.476 | 70.00th=[ 4080], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4146], 00:31:52.476 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 6194], 99.95th=[ 7111], 00:31:52.476 | 99.99th=[ 7177] 00:31:52.476 bw ( KiB/s): min=15232, max=15744, per=24.99%, avg=15576.67, stdev=196.21, samples=9 00:31:52.476 iops : min= 1904, max= 1968, avg=1947.00, stdev=24.68, samples=9 00:31:52.476 lat (msec) : 4=26.38%, 10=73.62% 00:31:52.476 cpu : usr=94.66%, sys=4.32%, ctx=5, majf=0, minf=0 00:31:52.476 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.476 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.476 issued rwts: total=9736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.476 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:52.476 filename1: (groupid=0, jobs=1): err= 0: pid=109531: Fri Dec 6 13:25:40 2024 00:31:52.476 read: IOPS=1953, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5004msec) 00:31:52.476 slat (nsec): min=6977, max=51360, avg=8869.61, stdev=3025.32 00:31:52.476 clat (usec): min=1266, max=5119, avg=4048.24, stdev=194.73 00:31:52.476 lat (usec): min=1281, max=5126, avg=4057.11, stdev=194.59 00:31:52.476 clat percentiles (usec): 00:31:52.476 | 1.00th=[ 3916], 5.00th=[ 3982], 10.00th=[ 3982], 20.00th=[ 4015], 00:31:52.476 | 30.00th=[ 4015], 40.00th=[ 4047], 
50.00th=[ 4047], 60.00th=[ 4080], 00:31:52.476 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4146], 95.00th=[ 4178], 00:31:52.476 | 99.00th=[ 4293], 99.50th=[ 4293], 99.90th=[ 5014], 99.95th=[ 5080], 00:31:52.476 | 99.99th=[ 5145] 00:31:52.476 bw ( KiB/s): min=15360, max=15744, per=25.10%, avg=15644.44, stdev=124.39, samples=9 00:31:52.476 iops : min= 1920, max= 1968, avg=1955.56, stdev=15.55, samples=9 00:31:52.476 lat (msec) : 2=0.41%, 4=11.93%, 10=87.66% 00:31:52.476 cpu : usr=94.00%, sys=4.84%, ctx=18, majf=0, minf=0 00:31:52.476 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.476 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.476 issued rwts: total=9776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.476 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:52.476 filename1: (groupid=0, jobs=1): err= 0: pid=109532: Fri Dec 6 13:25:40 2024 00:31:52.476 read: IOPS=1946, BW=15.2MiB/s (15.9MB/s)(76.1MiB/5001msec) 00:31:52.476 slat (usec): min=5, max=1140, avg=12.55, stdev=12.45 00:31:52.476 clat (usec): min=1165, max=6742, avg=4046.43, stdev=133.06 00:31:52.476 lat (usec): min=1176, max=6750, avg=4058.98, stdev=133.34 00:31:52.476 clat percentiles (usec): 00:31:52.476 | 1.00th=[ 3884], 5.00th=[ 3949], 10.00th=[ 3949], 20.00th=[ 3982], 00:31:52.476 | 30.00th=[ 4015], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4047], 00:31:52.476 | 70.00th=[ 4080], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4146], 00:31:52.476 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 6456], 99.95th=[ 6521], 00:31:52.476 | 99.99th=[ 6718] 00:31:52.476 bw ( KiB/s): min=15232, max=15744, per=24.99%, avg=15576.67, stdev=196.21, samples=9 00:31:52.476 iops : min= 1904, max= 1968, avg=1947.00, stdev=24.68, samples=9 00:31:52.476 lat (msec) : 2=0.01%, 4=22.96%, 10=77.03% 00:31:52.476 cpu : usr=94.40%, sys=4.50%, ctx=11, majf=0, minf=0 00:31:52.476 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.476 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.476 issued rwts: total=9736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.476 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:52.476 00:31:52.476 Run status group 0 (all jobs): 00:31:52.477 READ: bw=60.9MiB/s (63.8MB/s), 15.2MiB/s-15.3MiB/s (15.9MB/s-16.0MB/s), io=305MiB (319MB), run=5001-5004msec 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- 
# rpc_cmd bdev_null_delete bdev_null0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:52.477 ************************************ 00:31:52.477 END TEST fio_dif_rand_params 00:31:52.477 ************************************ 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.477 00:31:52.477 real 0m24.016s 00:31:52.477 user 2m7.070s 00:31:52.477 sys 0m4.655s 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.477 13:25:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:52.477 13:25:40 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:52.477 13:25:40 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:52.477 13:25:40 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.477 13:25:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:52.477 ************************************ 00:31:52.477 START TEST fio_dif_digest 00:31:52.477 ************************************ 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:52.477 13:25:40 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:52.477 bdev_null0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:52.477 [2024-12-06 13:25:40.636272] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.477 { 00:31:52.477 "params": { 00:31:52.477 "name": "Nvme$subsystem", 00:31:52.477 "trtype": "$TEST_TRANSPORT", 00:31:52.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.477 "adrfam": "ipv4", 00:31:52.477 "trsvcid": "$NVMF_PORT", 00:31:52.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.477 "hdgst": ${hdgst:-false}, 00:31:52.477 "ddgst": ${ddgst:-false} 00:31:52.477 }, 00:31:52.477 "method": "bdev_nvme_attach_controller" 00:31:52.477 } 00:31:52.477 EOF 00:31:52.477 )") 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:52.477 13:25:40 nvmf_dif.fio_dif_digest -- 
target/dif.sh@82 -- # gen_fio_conf 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:52.478 "params": { 00:31:52.478 "name": "Nvme0", 00:31:52.478 "trtype": "tcp", 00:31:52.478 "traddr": "10.0.0.3", 00:31:52.478 "adrfam": "ipv4", 00:31:52.478 "trsvcid": "4420", 00:31:52.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:52.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:52.478 "hdgst": true, 00:31:52.478 "ddgst": true 00:31:52.478 }, 00:31:52.478 "method": "bdev_nvme_attach_controller" 00:31:52.478 }' 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:52.478 13:25:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:52.478 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:52.478 ... 00:31:52.478 fio-3.35 00:31:52.478 Starting 3 threads 00:32:04.678 00:32:04.678 filename0: (groupid=0, jobs=1): err= 0: pid=109634: Fri Dec 6 13:25:51 2024 00:32:04.678 read: IOPS=223, BW=28.0MiB/s (29.3MB/s)(280MiB/10005msec) 00:32:04.678 slat (nsec): min=7358, max=58249, avg=14942.68, stdev=5640.22 00:32:04.678 clat (usec): min=5857, max=95390, avg=13391.48, stdev=6891.17 00:32:04.678 lat (usec): min=5866, max=95403, avg=13406.43, stdev=6891.47 00:32:04.678 clat percentiles (usec): 00:32:04.678 | 1.00th=[10159], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:32:04.678 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:32:04.678 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13960], 00:32:04.678 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[94897], 00:32:04.678 | 99.99th=[94897] 00:32:04.678 bw ( KiB/s): min=22784, max=32000, per=36.18%, avg=28510.32, stdev=3195.40, samples=19 00:32:04.678 iops : min= 178, max= 250, avg=222.74, stdev=24.96, samples=19 00:32:04.678 lat (msec) : 10=0.58%, 20=96.83%, 100=2.59% 00:32:04.678 cpu : usr=93.83%, sys=4.72%, ctx=78, majf=0, minf=0 00:32:04.678 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.678 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.678 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.678 filename0: (groupid=0, jobs=1): err= 0: pid=109635: Fri Dec 6 13:25:51 2024 00:32:04.678 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(264MiB/10004msec) 00:32:04.678 slat (nsec): min=7825, max=95803, avg=18505.08, stdev=9192.81 00:32:04.678 clat (usec): min=7143, max=18042, avg=14212.25, stdev=2048.34 00:32:04.678 lat (usec): min=7151, max=18063, avg=14230.75, stdev=2047.81 00:32:04.678 clat percentiles (usec): 00:32:04.678 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[13304], 00:32:04.678 | 30.00th=[13829], 40.00th=[14353], 50.00th=[14615], 60.00th=[15008], 00:32:04.678 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16319], 95.00th=[16712], 00:32:04.678 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:32:04.678 | 99.99th=[17957] 00:32:04.678 bw ( KiB/s): min=24832, max=28928, per=34.25%, avg=26984.95, stdev=1000.35, samples=19 00:32:04.678 iops : min= 194, max= 226, avg=210.79, stdev= 7.82, samples=19 00:32:04.678 lat (msec) : 10=8.97%, 20=91.03% 00:32:04.678 cpu : usr=92.26%, sys=5.78%, ctx=12, majf=0, minf=0 00:32:04.678 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.678 issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.678 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.678 filename0: (groupid=0, jobs=1): err= 0: pid=109636: Fri Dec 6 13:25:51 2024 00:32:04.678 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(227MiB/10005msec) 00:32:04.678 slat (nsec): min=9268, max=60823, avg=15815.43, stdev=6209.30 00:32:04.678 clat (usec): min=5098, max=20304, avg=16534.64, 
stdev=2372.49 00:32:04.678 lat (usec): min=5109, max=20320, avg=16550.46, stdev=2371.42 00:32:04.678 clat percentiles (usec): 00:32:04.678 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[11338], 20.00th=[16188], 00:32:04.678 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:32:04.678 | 70.00th=[17695], 80.00th=[18220], 90.00th=[18482], 95.00th=[18744], 00:32:04.678 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20055], 99.95th=[20317], 00:32:04.678 | 99.99th=[20317] 00:32:04.678 bw ( KiB/s): min=21504, max=25600, per=29.51%, avg=23253.26, stdev=1311.70, samples=19 00:32:04.678 iops : min= 168, max= 200, avg=181.63, stdev=10.29, samples=19 00:32:04.678 lat (msec) : 10=1.82%, 20=98.01%, 50=0.17% 00:32:04.678 cpu : usr=94.14%, sys=4.63%, ctx=10, majf=0, minf=0 00:32:04.678 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.678 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.678 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.678 00:32:04.678 Run status group 0 (all jobs): 00:32:04.678 READ: bw=76.9MiB/s (80.7MB/s), 22.7MiB/s-28.0MiB/s (23.8MB/s-29.3MB/s), io=770MiB (807MB), run=10004-10005msec 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:04.678 ************************************ 00:32:04.678 END TEST fio_dif_digest 00:32:04.678 ************************************ 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.678 00:32:04.678 real 0m11.054s 00:32:04.678 user 0m28.730s 00:32:04.678 sys 0m1.797s 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.678 13:25:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:04.678 13:25:51 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:04.678 13:25:51 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:04.678 13:25:51 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.678 13:25:51 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:04.679 13:25:51 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.679 13:25:51 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:04.679 13:25:51 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.679 
13:25:51 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.679 rmmod nvme_tcp 00:32:04.679 rmmod nvme_fabrics 00:32:04.679 rmmod nvme_keyring 00:32:04.679 13:25:51 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.679 13:25:51 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:04.679 13:25:51 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:04.679 13:25:51 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 108895 ']' 00:32:04.679 13:25:51 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 108895 00:32:04.679 13:25:51 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 108895 ']' 00:32:04.679 13:25:51 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 108895 00:32:04.679 13:25:51 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:32:04.679 13:25:51 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:04.679 13:25:51 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108895 00:32:04.679 killing process with pid 108895 00:32:04.679 13:25:51 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:04.679 13:25:51 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:04.679 13:25:51 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108895' 00:32:04.679 13:25:51 nvmf_dif -- common/autotest_common.sh@973 -- # kill 108895 00:32:04.679 13:25:51 nvmf_dif -- common/autotest_common.sh@978 -- # wait 108895 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:04.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:04.679 Waiting for block devices as requested 00:32:04.679 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:04.679 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.679 13:25:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:04.679 13:25:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.679 13:25:52 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:32:04.679 00:32:04.679 real 1m0.397s 00:32:04.679 user 3m53.946s 00:32:04.679 sys 0m13.795s 00:32:04.679 13:25:52 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.679 13:25:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:04.679 ************************************ 00:32:04.679 END TEST nvmf_dif 00:32:04.679 ************************************ 00:32:04.679 13:25:52 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:04.679 13:25:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:04.679 13:25:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.679 13:25:52 -- common/autotest_common.sh@10 -- # set +x 00:32:04.679 ************************************ 00:32:04.679 START TEST nvmf_abort_qd_sizes 00:32:04.679 ************************************ 00:32:04.679 13:25:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:04.679 * Looking for test storage... 00:32:04.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:04.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.679 --rc genhtml_branch_coverage=1 00:32:04.679 --rc genhtml_function_coverage=1 00:32:04.679 --rc genhtml_legend=1 00:32:04.679 --rc geninfo_all_blocks=1 00:32:04.679 --rc geninfo_unexecuted_blocks=1 00:32:04.679 00:32:04.679 ' 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:04.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.679 --rc genhtml_branch_coverage=1 00:32:04.679 --rc genhtml_function_coverage=1 00:32:04.679 --rc genhtml_legend=1 00:32:04.679 --rc geninfo_all_blocks=1 00:32:04.679 --rc geninfo_unexecuted_blocks=1 00:32:04.679 00:32:04.679 ' 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:04.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.679 --rc genhtml_branch_coverage=1 00:32:04.679 --rc genhtml_function_coverage=1 00:32:04.679 --rc genhtml_legend=1 00:32:04.679 --rc geninfo_all_blocks=1 00:32:04.679 --rc geninfo_unexecuted_blocks=1 00:32:04.679 00:32:04.679 ' 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:04.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.679 --rc genhtml_branch_coverage=1 00:32:04.679 --rc genhtml_function_coverage=1 00:32:04.679 --rc genhtml_legend=1 00:32:04.679 --rc geninfo_all_blocks=1 00:32:04.679 --rc geninfo_unexecuted_blocks=1 00:32:04.679 00:32:04.679 ' 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.679 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:04.680 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:04.680 Cannot find device "nvmf_init_br" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:04.680 Cannot find device "nvmf_init_br2" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:04.680 Cannot find device "nvmf_tgt_br" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:04.680 Cannot find device "nvmf_tgt_br2" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:04.680 Cannot find device "nvmf_init_br" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:04.680 Cannot find device "nvmf_init_br2" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:04.680 Cannot find device "nvmf_tgt_br" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:04.680 Cannot find device "nvmf_tgt_br2" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:04.680 Cannot find device "nvmf_br" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:04.680 Cannot find device "nvmf_init_if" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:04.680 Cannot find device "nvmf_init_if2" 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:04.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
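The "Cannot find device" and "Cannot open network namespace" messages above are expected on a fresh host: nvmf_veth_init begins by tearing down any leftover topology from a previous run, and every cleanup command is paired with a true (visible as the repeated "-- # true" trace entries) so the failures do not abort the script under set -e. A minimal sketch of the idiom, using a hypothetical quiet helper where the suite inlines the fallback per command:

quiet() { "$@" 2>&1 || true; }             # hypothetical helper, not the suite's exact wording
quiet ip link set nvmf_init_br nomaster    # prints "Cannot find device" on a clean host
quiet ip link delete nvmf_br type bridge
quiet ip link delete nvmf_init_if
quiet ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if

Running the teardown unconditionally keeps the setup idempotent: the same script works whether the previous run cleaned up after itself or died mid-test.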
00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:04.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:04.680 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:04.939 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:04.940 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:04.940 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:04.940 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:04.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:04.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:32:04.940 00:32:04.940 --- 10.0.0.3 ping statistics --- 00:32:04.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.940 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:32:04.940 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:04.940 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:04.940 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:32:04.940 00:32:04.940 --- 10.0.0.4 ping statistics --- 00:32:04.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.940 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:32:04.940 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:04.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:04.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:32:04.940 00:32:04.940 --- 10.0.0.1 ping statistics --- 00:32:04.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.940 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:32:04.940 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:04.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
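With the stale state cleared, the trace assembles the virtual test network: four veth pairs are created, the nvmf_tgt_if and nvmf_tgt_if2 ends are moved into the nvmf_tgt_ns_spdk namespace and addressed as 10.0.0.3/24 and 10.0.0.4/24, the initiator ends stay on the host as 10.0.0.1/24 and 10.0.0.2/24, and all four peer legs are enslaved to the nvmf_br bridge. Note also the iptables ACCEPT rules tagged with -m comment --comment 'SPDK_NVMF:...': the tag lets the final cleanup filter exactly these rules back out of iptables-save output. A condensed single-pair sketch of the same topology (interface names as in the log; the second pair and error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the two peer legs together
ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up

The four pings that follow verify both directions across the bridge: host to namespace (10.0.0.3, 10.0.0.4) and namespace back to host (10.0.0.1, 10.0.0.2).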
00:32:04.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:32:04.940 00:32:04.940 --- 10.0.0.2 ping statistics --- 00:32:04.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.940 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:32:04.940 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.940 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:32:04.940 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:04.940 13:25:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:05.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:05.766 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:05.766 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=110280 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 110280 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 110280 ']' 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:05.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:05.766 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:06.025 [2024-12-06 13:25:54.424122] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
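nvmfappstart then launches the SPDK target inside the namespace: NVMF_APP was prefixed earlier with NVMF_TARGET_NS_CMD, which is why the command line reads "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xf" (four reactors, cores 0-3), and waitforlisten blocks until the app answers on its RPC Unix socket. An illustrative stand-in for that wait, assuming the standard rpc.py rpc_get_methods call; waitforlisten's real body differs in detail:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# Poll the RPC socket until the target is ready; bail out if the process died.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1
    sleep 0.1
done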
00:32:06.025 [2024-12-06 13:25:54.424223] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.025 [2024-12-06 13:25:54.579139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:06.025 [2024-12-06 13:25:54.642185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.025 [2024-12-06 13:25:54.642242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.025 [2024-12-06 13:25:54.642261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.025 [2024-12-06 13:25:54.642272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.025 [2024-12-06 13:25:54.642281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.025 [2024-12-06 13:25:54.643565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.025 [2024-12-06 13:25:54.643637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.025 [2024-12-06 13:25:54.643684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:06.025 [2024-12-06 13:25:54.643688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:32:06.283 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:32:06.284 13:25:54 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.284 13:25:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:06.284 ************************************ 00:32:06.284 START TEST spdk_target_abort 00:32:06.284 ************************************ 00:32:06.284 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:32:06.284 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:06.284 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:32:06.284 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.284 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.542 spdk_targetn1 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.542 [2024-12-06 13:25:54.933038] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.542 [2024-12-06 13:25:54.967646] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.542 13:25:54 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:06.542 13:25:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:09.833 Initializing NVMe Controllers 00:32:09.833 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:32:09.833 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:09.833 Initialization complete. Launching workers. 
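The qd=4 run just launched is the first iteration of the rabort helper. As the target='...' trace entries above show, rabort builds the abort example's -r connection string one field at a time from its trtype/adrfam/traddr/trsvcid/subnqn locals, then loops over the queue depths. A reconstructed sketch of that assembly:

trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:testnqn
target=
for r in trtype adrfam traddr trsvcid subnqn; do
    target+=${target:+ }$r:${!r}    # indirect expansion: variable name -> value
done
for qd in 4 24 64; do               # the three queue depths under test
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done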
00:32:09.833 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11203, failed: 0 00:32:09.833 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1080, failed to submit 10123 00:32:09.833 success 695, unsuccessful 385, failed 0 00:32:09.833 13:25:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:09.833 13:25:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:13.119 Initializing NVMe Controllers 00:32:13.119 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:32:13.119 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:13.119 Initialization complete. Launching workers. 00:32:13.119 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5948, failed: 0 00:32:13.119 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 4708 00:32:13.119 success 245, unsuccessful 995, failed 0 00:32:13.120 13:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:13.120 13:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.398 Initializing NVMe Controllers 00:32:16.398 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:32:16.398 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:16.398 Initialization complete. Launching workers. 
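The counters in each results block are internally consistent: aborts submitted plus aborts that could not be submitted equals the number of I/Os (qd=4: 1080 + 10123 = 11203; qd=24: 1240 + 4708 = 5948), and success plus unsuccessful equals the aborts submitted (695 + 385 = 1080; 245 + 995 = 1240). An "unsuccessful" abort is one that completed without catching its command, typically because the I/O had already finished; the harness accepts that, and only nonzero "failed" counts would indicate a problem. Note how the deeper queue (qd=24) keeps more aborts in flight yet lands a lower success rate against the same workload (-w rw -M 50, a 50/50 read/write mix at 4 KiB).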
00:32:16.398 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29073, failed: 0 00:32:16.398 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2681, failed to submit 26392 00:32:16.398 success 318, unsuccessful 2363, failed 0 00:32:16.398 13:26:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:16.398 13:26:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.398 13:26:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.398 13:26:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.398 13:26:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:16.398 13:26:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.398 13:26:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 110280 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 110280 ']' 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 110280 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110280 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:17.331 killing process with pid 110280 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110280' 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 110280 00:32:17.331 13:26:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 110280 00:32:17.589 ************************************ 00:32:17.589 END TEST spdk_target_abort 00:32:17.589 ************************************ 00:32:17.589 00:32:17.589 real 0m11.260s 00:32:17.589 user 0m41.925s 00:32:17.589 sys 0m1.708s 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:17.589 13:26:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:17.589 13:26:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:17.589 13:26:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.589 13:26:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:17.589 ************************************ 00:32:17.589 START TEST kernel_target_abort 00:32:17.589 
************************************ 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:17.589 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:18.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:18.155 Waiting for block devices as requested 00:32:18.155 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:18.155 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:32:18.412 No valid GPT data, bailing 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:32:18.412 No valid GPT data, bailing 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
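The kernel target needs a raw local NVMe namespace it can export, so setup.sh reset first returns the test devices from uio_pci_generic to the kernel nvme driver, and the loop above walks /sys/block/nvme*, rejecting zoned namespaces and anything already carrying a partition table. "No valid GPT data, bailing" from spdk-gpt.py is the desirable outcome here: it marks the device as free. An approximate reconstruction, folding the spdk-gpt.py probe into the blkid check (a simplification, not the suite's exact test):

for block in /sys/block/nvme*; do
    dev=/dev/${block##*/}
    # skip zoned namespaces
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # a non-empty PTTYPE means a partition table, i.e. the device is in use
    [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
    nvme=$dev    # the last free device wins: /dev/nvme1n1 in the entries that follow
done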
00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:32:18.412 No valid GPT data, bailing 00:32:18.412 13:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:32:18.412 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:32:18.670 No valid GPT data, bailing 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 --hostid=02865f8e-a4d1-44f6-b22e-478c939aabc8 -a 10.0.0.1 -t tcp -s 4420 00:32:18.670 00:32:18.670 Discovery Log Number of Records 2, Generation counter 2 00:32:18.670 =====Discovery Log Entry 0====== 00:32:18.670 trtype: tcp 00:32:18.670 adrfam: ipv4 00:32:18.670 subtype: current discovery subsystem 00:32:18.670 treq: not specified, sq flow control disable supported 00:32:18.670 portid: 1 00:32:18.670 trsvcid: 4420 00:32:18.670 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:18.670 traddr: 10.0.0.1 00:32:18.670 eflags: none 00:32:18.670 sectype: none 00:32:18.670 =====Discovery Log Entry 1====== 00:32:18.670 trtype: tcp 00:32:18.670 adrfam: ipv4 00:32:18.670 subtype: nvme subsystem 00:32:18.670 treq: not specified, sq flow control disable supported 00:32:18.670 portid: 1 00:32:18.670 trsvcid: 4420 00:32:18.670 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:18.670 traddr: 10.0.0.1 00:32:18.670 eflags: none 00:32:18.670 sectype: none 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:18.670 13:26:07 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:18.670 13:26:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:21.949 Initializing NVMe Controllers 00:32:21.949 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:21.949 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:21.949 Initialization complete. Launching workers. 00:32:21.949 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34662, failed: 0 00:32:21.949 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34662, failed to submit 0 00:32:21.949 success 0, unsuccessful 34662, failed 0 00:32:21.949 13:26:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:21.949 13:26:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:25.257 Initializing NVMe Controllers 00:32:25.257 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:25.257 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:25.257 Initialization complete. Launching workers. 
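Stepping back to the configfs sequence traced before these abort runs: the kernel target is assembled entirely under /sys/kernel/config/nvmet with mkdir, echo, and ln. The xtrace output hides each echo's redirection target, so the attribute paths in this condensed sketch are the standard nvmet configfs names, an assumption rather than something the log proves:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"    # assumed target file
echo 1            > "$subsys/attr_allow_any_host"               # assumed target file
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"    # expose the subsystem on port 1

The nvme discover output above confirms both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are listening on 10.0.0.1:4420. Note also that every abort against the kernel target lands as unsuccessful (success 0), plausibly because the kernel target completes each 4 KiB I/O before the abort can catch it.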
00:32:25.257 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69098, failed: 0 00:32:25.257 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30136, failed to submit 38962 00:32:25.257 success 0, unsuccessful 30136, failed 0 00:32:25.257 13:26:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:25.257 13:26:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:28.540 Initializing NVMe Controllers 00:32:28.540 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:28.540 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:28.540 Initialization complete. Launching workers. 00:32:28.540 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82583, failed: 0 00:32:28.540 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20610, failed to submit 61973 00:32:28.540 success 0, unsuccessful 20610, failed 0 00:32:28.540 13:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:28.540 13:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:28.540 13:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:32:28.540 13:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:28.540 13:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:28.540 13:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:28.540 13:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:28.540 13:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:28.540 13:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:28.540 13:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:28.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:31.332 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:31.332 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:31.332 00:32:31.332 real 0m13.382s 00:32:31.332 user 0m6.369s 00:32:31.332 sys 0m4.481s 00:32:31.332 13:26:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.332 13:26:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.332 ************************************ 00:32:31.332 END TEST kernel_target_abort 00:32:31.332 ************************************ 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:31.332 
13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.332 rmmod nvme_tcp 00:32:31.332 rmmod nvme_fabrics 00:32:31.332 rmmod nvme_keyring 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 110280 ']' 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 110280 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 110280 ']' 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 110280 00:32:31.332 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (110280) - No such process 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 110280 is not found' 00:32:31.332 Process with pid 110280 is not found 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:31.332 13:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:31.591 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:31.591 Waiting for block devices as requested 00:32:31.591 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:31.591 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:31.850 13:26:20 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:32:31.850 00:32:31.850 real 0m27.540s 00:32:31.850 user 0m49.358s 00:32:31.850 sys 0m7.564s 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.850 13:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:31.850 ************************************ 00:32:31.850 END TEST nvmf_abort_qd_sizes 00:32:31.850 ************************************ 00:32:32.109 13:26:20 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:32:32.109 13:26:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:32.109 13:26:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.109 13:26:20 -- common/autotest_common.sh@10 -- # set +x 00:32:32.109 ************************************ 00:32:32.109 START TEST keyring_file 00:32:32.109 ************************************ 00:32:32.109 13:26:20 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:32:32.109 * Looking for test storage... 
00:32:32.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:32:32.109 13:26:20 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:32.109 13:26:20 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:32:32.109 13:26:20 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:32.109 13:26:20 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.109 13:26:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.110 13:26:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:32:32.110 13:26:20 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.110 13:26:20 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:32.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.110 --rc genhtml_branch_coverage=1 00:32:32.110 --rc genhtml_function_coverage=1 00:32:32.110 --rc genhtml_legend=1 00:32:32.110 --rc geninfo_all_blocks=1 00:32:32.110 --rc geninfo_unexecuted_blocks=1 00:32:32.110 00:32:32.110 ' 00:32:32.110 13:26:20 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:32.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.110 --rc genhtml_branch_coverage=1 00:32:32.110 --rc genhtml_function_coverage=1 00:32:32.110 --rc genhtml_legend=1 00:32:32.110 --rc geninfo_all_blocks=1 00:32:32.110 --rc 
geninfo_unexecuted_blocks=1 00:32:32.110 00:32:32.110 ' 00:32:32.110 13:26:20 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:32.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.110 --rc genhtml_branch_coverage=1 00:32:32.110 --rc genhtml_function_coverage=1 00:32:32.110 --rc genhtml_legend=1 00:32:32.110 --rc geninfo_all_blocks=1 00:32:32.110 --rc geninfo_unexecuted_blocks=1 00:32:32.110 00:32:32.110 ' 00:32:32.110 13:26:20 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:32.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.110 --rc genhtml_branch_coverage=1 00:32:32.110 --rc genhtml_function_coverage=1 00:32:32.110 --rc genhtml_legend=1 00:32:32.110 --rc geninfo_all_blocks=1 00:32:32.110 --rc geninfo_unexecuted_blocks=1 00:32:32.110 00:32:32.110 ' 00:32:32.110 13:26:20 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:32:32.110 13:26:20 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:32.110 13:26:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.110 13:26:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.110 13:26:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.110 13:26:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.110 13:26:20 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.110 13:26:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.110 13:26:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.110 13:26:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:32.110 13:26:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@51 -- # : 0 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:32.110 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.110 13:26:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:32.110 13:26:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:32.110 13:26:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:32.110 13:26:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:32.110 13:26:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:32.110 13:26:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:32.110 13:26:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:32.110 13:26:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:32.110 13:26:20 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:32.110 13:26:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:32.110 13:26:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:32.110 13:26:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:32.110 13:26:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Bdv7VIlwwk 00:32:32.110 13:26:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:32.110 13:26:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Bdv7VIlwwk 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Bdv7VIlwwk 00:32:32.369 13:26:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Bdv7VIlwwk 00:32:32.369 13:26:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DIpSpoy7UB 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:32.369 13:26:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:32.369 13:26:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:32.369 13:26:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:32.369 13:26:20 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:32.369 13:26:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:32.369 13:26:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DIpSpoy7UB 00:32:32.369 13:26:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DIpSpoy7UB 00:32:32.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
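prep_key, traced above for both key0 and key1, turns a hex key into a TLS PSK interchange string by shelling out to Python, then stores it in a mode-0600 temp file. A sketch of what that encoding plausibly computes — the NVMeTLSkey-1 prefix, a two-digit hash identifier, and base64 of the key bytes plus a little-endian CRC32 trailer follow the NVMe/TCP PSK interchange convention, but treat the byte interpretation of the hex key and the exact layout as assumptions rather than a copy of format_interchange_psk:

    import base64
    import struct
    import zlib

    def format_interchange_psk(hex_key: str, digest: int = 0) -> str:
        # digest 0: the configured key is carried as-is (no PSK hash)
        key = bytes.fromhex(hex_key)
        crc = struct.pack("<I", zlib.crc32(key))  # 4-byte CRC32 trailer
        b64 = base64.b64encode(key + crc).decode()
        return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

    print(format_interchange_psk("00112233445566778899aabbccddeeff"))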
00:32:32.369 13:26:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.DIpSpoy7UB 00:32:32.369 13:26:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=111188 00:32:32.369 13:26:20 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:32.369 13:26:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 111188 00:32:32.369 13:26:20 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111188 ']' 00:32:32.369 13:26:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.370 13:26:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.370 13:26:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.370 13:26:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.370 13:26:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:32.370 [2024-12-06 13:26:20.888400] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:32:32.370 [2024-12-06 13:26:20.888688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111188 ] 00:32:32.688 [2024-12-06 13:26:21.039254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.688 [2024-12-06 13:26:21.104414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:32.963 13:26:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:32.963 [2024-12-06 13:26:21.396533] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.963 null0 00:32:32.963 [2024-12-06 13:26:21.428645] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:32.963 [2024-12-06 13:26:21.428976] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.963 13:26:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:32.963 [2024-12-06 13:26:21.460629] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:32.963 2024/12/06 13:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:32:32.963 request: 00:32:32.963 { 00:32:32.963 "method": "nvmf_subsystem_add_listener", 00:32:32.963 "params": { 00:32:32.963 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.963 "secure_channel": false, 00:32:32.963 "listen_address": { 00:32:32.963 "trtype": "tcp", 00:32:32.963 "traddr": "127.0.0.1", 00:32:32.963 "trsvcid": "4420" 00:32:32.963 } 00:32:32.963 } 00:32:32.963 } 00:32:32.963 Got JSON-RPC error response 00:32:32.963 GoRPCClient: error on JSON-RPC call 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:32.963 13:26:21 keyring_file -- keyring/file.sh@47 -- # bperfpid=111206 00:32:32.963 13:26:21 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:32.963 13:26:21 keyring_file -- keyring/file.sh@49 -- # waitforlisten 111206 /var/tmp/bperf.sock 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111206 ']' 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:32.963 13:26:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.964 13:26:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:32.964 [2024-12-06 13:26:21.523434] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
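The rejected nvmf_subsystem_add_listener call a few entries back is the expected negative case: the target already listens on 127.0.0.1:4420, so a second add comes back with Code=-32602. Because the SPDK RPC server speaks plain JSON over a UNIX socket, the same request can be issued without any client library; a hedged sketch (the single recv is a simplification — a real client reads until the response parses):

    import json
    import socket

    req = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_listener",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode0",
            "secure_channel": False,
            "listen_address": {"trtype": "tcp", "traddr": "127.0.0.1", "trsvcid": "4420"},
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/var/tmp/spdk.sock")
        s.sendall(json.dumps(req).encode())
        print(s.recv(65536).decode())  # expect "Listener already exists", Code=-32602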
00:32:32.964 [2024-12-06 13:26:21.523542] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111206 ] 00:32:33.222 [2024-12-06 13:26:21.671872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.222 [2024-12-06 13:26:21.747718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.480 13:26:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.480 13:26:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:33.480 13:26:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Bdv7VIlwwk 00:32:33.480 13:26:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Bdv7VIlwwk 00:32:33.738 13:26:22 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DIpSpoy7UB 00:32:33.738 13:26:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DIpSpoy7UB 00:32:33.997 13:26:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:33.997 13:26:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:32:33.997 13:26:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:33.997 13:26:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:33.997 13:26:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.255 13:26:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Bdv7VIlwwk == \/\t\m\p\/\t\m\p\.\B\d\v\7\V\I\l\w\w\k ]] 00:32:34.255 13:26:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:32:34.255 13:26:22 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:32:34.255 13:26:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:34.255 13:26:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.255 13:26:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:34.822 13:26:23 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.DIpSpoy7UB == \/\t\m\p\/\t\m\p\.\D\I\p\S\p\o\y\7\U\B ]] 00:32:34.822 13:26:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:32:34.822 13:26:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:34.822 13:26:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:34.822 13:26:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:34.822 13:26:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.822 13:26:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:35.080 13:26:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:35.080 13:26:23 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:32:35.080 13:26:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:35.080 13:26:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:35.080 13:26:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.080 13:26:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:32:35.080 13:26:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.339 13:26:23 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:32:35.339 13:26:23 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.339 13:26:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.597 [2024-12-06 13:26:24.013088] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:35.597 nvme0n1 00:32:35.597 13:26:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:32:35.597 13:26:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:35.597 13:26:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:35.597 13:26:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.597 13:26:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:35.597 13:26:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.854 13:26:24 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:32:35.854 13:26:24 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:32:35.854 13:26:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:35.854 13:26:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:35.854 13:26:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.854 13:26:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.854 13:26:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:36.112 13:26:24 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:32:36.112 13:26:24 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:36.370 Running I/O for 1 seconds... 
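Before launching I/O, the trace attaches nvme0n1 with --psk key0 and uses get_refcnt (keyring_get_keys piped through jq) to confirm the attach took a second reference on the key. The same check in Python, driving the scripts/rpc.py CLI exactly as the trace does (the relative script path assumes an SPDK checkout as the working directory):

    import json
    import subprocess

    def get_refcnt(name: str, sock: str = "/var/tmp/bperf.sock") -> int:
        out = subprocess.run(
            ["scripts/rpc.py", "-s", sock, "keyring_get_keys"],
            capture_output=True, text=True, check=True,
        ).stdout
        return next(k["refcnt"] for k in json.loads(out) if k["name"] == name)

    # one reference held by the keyring itself, one by the attached controller
    assert get_refcnt("key0") == 2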
00:32:37.349 11845.00 IOPS, 46.27 MiB/s
00:32:37.349 Latency(us)
00:32:37.349 [2024-12-06T13:26:25.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:37.349 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:32:37.349 nvme0n1 : 1.01 11897.25 46.47 0.00 0.00 10730.95 6285.50 19541.64
00:32:37.349 [2024-12-06T13:26:25.989Z] ===================================================================================================================
00:32:37.349 [2024-12-06T13:26:25.989Z] Total : 11897.25 46.47 0.00 0.00 10730.95 6285.50 19541.64
00:32:37.349 {
00:32:37.349 "results": [
00:32:37.349 {
00:32:37.349 "job": "nvme0n1",
00:32:37.349 "core_mask": "0x2",
00:32:37.349 "workload": "randrw",
00:32:37.349 "percentage": 50,
00:32:37.349 "status": "finished",
00:32:37.349 "queue_depth": 128,
00:32:37.349 "io_size": 4096,
00:32:37.349 "runtime": 1.006367,
00:32:37.349 "iops": 11897.250207926134,
00:32:37.349 "mibps": 46.47363362471146,
00:32:37.349 "io_failed": 0,
00:32:37.349 "io_timeout": 0,
00:32:37.349 "avg_latency_us": 10730.953352922867,
00:32:37.349 "min_latency_us": 6285.498181818181,
00:32:37.349 "max_latency_us": 19541.643636363635
00:32:37.349 }
00:32:37.349 ],
00:32:37.349 "core_count": 1
00:32:37.349 }
00:32:37.349 13:26:25 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:32:37.349 13:26:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:32:37.607 13:26:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:32:37.607 13:26:26 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:32:37.607 13:26:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:37.607 13:26:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:37.607 13:26:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:37.607 13:26:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:32:37.864 13:26:26 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:32:37.864 13:26:26 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:32:37.864 13:26:26 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:32:37.864 13:26:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:37.864 13:26:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:37.864 13:26:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:37.864 13:26:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:32:38.121 13:26:26 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:32:38.121 13:26:26 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:32:38.121 13:26:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:32:38.121 13:26:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:32:38.121 13:26:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:32:38.121 13:26:26 keyring_file --
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:38.121 13:26:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:38.121 13:26:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:38.121 13:26:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:38.121 13:26:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:38.379 [2024-12-06 13:26:26.964742] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:38.379 [2024-12-06 13:26:26.965516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f3190 (107): Transport endpoint is not connected 00:32:38.379 [2024-12-06 13:26:26.966504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f3190 (9): Bad file descriptor 00:32:38.379 [2024-12-06 13:26:26.967502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:38.379 [2024-12-06 13:26:26.967523] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:38.379 [2024-12-06 13:26:26.967534] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:38.379 [2024-12-06 13:26:26.967546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
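The NOT wrapper traced above inverts a command's exit status: attaching with key1 — the wrong PSK for this listener — must fail, and the errors just logged are exactly what the test wants to see (es ends up 1). A minimal Python equivalent of that expect-failure pattern; the helper name is invented, the rpc.py arguments are copied from the trace:

    import subprocess

    def expect_failure(*cmd: str) -> None:
        # mirrors autotest's NOT: the wrapped command must exit non-zero
        rc = subprocess.run(cmd).returncode
        assert rc != 0, f"{cmd[0]} unexpectedly succeeded"

    expect_failure(
        "scripts/rpc.py", "-s", "/var/tmp/bperf.sock",
        "bdev_nvme_attach_controller", "-b", "nvme0", "-t", "tcp",
        "-a", "127.0.0.1", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode0",
        "-q", "nqn.2016-06.io.spdk:host0", "--psk", "key1",
    )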
00:32:38.379 2024/12/06 13:26:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:32:38.379 request: 00:32:38.379 { 00:32:38.379 "method": "bdev_nvme_attach_controller", 00:32:38.379 "params": { 00:32:38.379 "name": "nvme0", 00:32:38.379 "trtype": "tcp", 00:32:38.379 "traddr": "127.0.0.1", 00:32:38.379 "adrfam": "ipv4", 00:32:38.379 "trsvcid": "4420", 00:32:38.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:38.379 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:38.379 "prchk_reftag": false, 00:32:38.379 "prchk_guard": false, 00:32:38.379 "hdgst": false, 00:32:38.379 "ddgst": false, 00:32:38.379 "psk": "key1", 00:32:38.379 "allow_unrecognized_csi": false 00:32:38.379 } 00:32:38.379 } 00:32:38.379 Got JSON-RPC error response 00:32:38.379 GoRPCClient: error on JSON-RPC call 00:32:38.379 13:26:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:38.379 13:26:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:38.379 13:26:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:38.379 13:26:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:38.379 13:26:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:32:38.379 13:26:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:38.379 13:26:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:38.379 13:26:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:38.379 13:26:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:38.379 13:26:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:38.638 13:26:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:38.638 13:26:27 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:32:38.638 13:26:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:38.638 13:26:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:38.638 13:26:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:38.638 13:26:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:38.638 13:26:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:39.204 13:26:27 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:32:39.204 13:26:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:32:39.204 13:26:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:39.463 13:26:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:32:39.463 13:26:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:39.721 13:26:28 keyring_file -- keyring/file.sh@78 -- # jq length 00:32:39.721 13:26:28 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:32:39.721 13:26:28 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:39.980 13:26:28 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:32:39.980 13:26:28 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Bdv7VIlwwk 00:32:39.980 13:26:28 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Bdv7VIlwwk 00:32:39.980 13:26:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:39.980 13:26:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Bdv7VIlwwk 00:32:39.980 13:26:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:39.980 13:26:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.980 13:26:28 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:39.980 13:26:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.980 13:26:28 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Bdv7VIlwwk 00:32:39.980 13:26:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Bdv7VIlwwk 00:32:40.239 [2024-12-06 13:26:28.706810] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Bdv7VIlwwk': 0100660 00:32:40.239 [2024-12-06 13:26:28.706846] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:40.239 2024/12/06 13:26:28 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.Bdv7VIlwwk], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:32:40.239 request: 00:32:40.239 { 00:32:40.239 "method": "keyring_file_add_key", 00:32:40.239 "params": { 00:32:40.239 "name": "key0", 00:32:40.239 "path": "/tmp/tmp.Bdv7VIlwwk" 00:32:40.239 } 00:32:40.239 } 00:32:40.239 Got JSON-RPC error response 00:32:40.239 GoRPCClient: error on JSON-RPC call 00:32:40.239 13:26:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:40.239 13:26:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:40.239 13:26:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:40.239 13:26:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:40.239 13:26:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Bdv7VIlwwk 00:32:40.239 13:26:28 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Bdv7VIlwwk 00:32:40.239 13:26:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Bdv7VIlwwk 00:32:40.498 13:26:28 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Bdv7VIlwwk 00:32:40.498 13:26:29 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:32:40.498 13:26:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:40.498 13:26:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.498 13:26:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.498 13:26:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:40.498 13:26:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.756 13:26:29 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:32:40.756 13:26:29 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:40.756 13:26:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:40.756 13:26:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:40.756 13:26:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:40.756 13:26:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:40.756 13:26:29 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:40.756 13:26:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:40.756 13:26:29 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:40.756 13:26:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:41.022 [2024-12-06 13:26:29.507122] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Bdv7VIlwwk': No such file or directory 00:32:41.022 [2024-12-06 13:26:29.507171] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:41.022 [2024-12-06 13:26:29.507193] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:41.022 [2024-12-06 13:26:29.507205] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:32:41.022 [2024-12-06 13:26:29.507214] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:41.022 [2024-12-06 13:26:29.507223] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:41.022 2024/12/06 13:26:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:32:41.022 request: 00:32:41.022 { 00:32:41.022 "method": "bdev_nvme_attach_controller", 00:32:41.022 "params": { 00:32:41.022 "name": "nvme0", 00:32:41.022 "trtype": "tcp", 00:32:41.022 "traddr": "127.0.0.1", 00:32:41.022 "adrfam": "ipv4", 00:32:41.022 "trsvcid": "4420", 00:32:41.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:41.022 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:41.022 "prchk_reftag": false, 00:32:41.022 "prchk_guard": false, 00:32:41.022 "hdgst": false, 00:32:41.022 "ddgst": false, 00:32:41.022 "psk": "key0", 00:32:41.022 "allow_unrecognized_csi": false 00:32:41.022 } 00:32:41.022 } 00:32:41.022 Got JSON-RPC error response 00:32:41.022 
GoRPCClient: error on JSON-RPC call 00:32:41.022 13:26:29 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:41.022 13:26:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:41.022 13:26:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:41.022 13:26:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:41.022 13:26:29 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:32:41.022 13:26:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:41.298 13:26:29 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:41.298 13:26:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:41.298 13:26:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:41.298 13:26:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:41.298 13:26:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:41.298 13:26:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:41.298 13:26:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kRkMbr1xtg 00:32:41.298 13:26:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:41.298 13:26:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:41.298 13:26:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:41.298 13:26:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:41.298 13:26:29 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:41.298 13:26:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:41.298 13:26:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:41.298 13:26:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kRkMbr1xtg 00:32:41.298 13:26:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kRkMbr1xtg 00:32:41.298 13:26:29 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.kRkMbr1xtg 00:32:41.298 13:26:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kRkMbr1xtg 00:32:41.298 13:26:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kRkMbr1xtg 00:32:41.865 13:26:30 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:41.865 13:26:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:42.123 nvme0n1 00:32:42.123 13:26:30 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:32:42.123 13:26:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:42.123 13:26:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:42.123 13:26:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.123 13:26:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.123 13:26:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
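The attach above only succeeds because the test re-created key0 with mode 0600: a few entries earlier the keyring refused the same file at 0660 ("Invalid permissions for key file ... 0100660"), and it likewise refused a path that had been rm -f'd. A small sketch of the create-with-tight-permissions step the trace performs with mktemp and chmod:

    import os
    import stat
    import tempfile

    def write_key_file(interchange_psk: str) -> str:
        fd, path = tempfile.mkstemp()   # mktemp; mkstemp already creates 0600
        os.write(fd, interchange_psk.encode())
        os.close(fd)
        os.chmod(path, 0o600)           # explicit, mirroring the trace; 0660 would be rejected
        assert stat.S_IMODE(os.stat(path).st_mode) == 0o600
        return path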
00:32:42.381 13:26:30 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:32:42.381 13:26:30 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:32:42.381 13:26:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:42.639 13:26:31 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:32:42.639 13:26:31 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:32:42.639 13:26:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.639 13:26:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.639 13:26:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:42.897 13:26:31 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:32:42.897 13:26:31 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:32:42.897 13:26:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:42.897 13:26:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:42.897 13:26:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.897 13:26:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:42.897 13:26:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:43.463 13:26:31 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:32:43.463 13:26:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:43.463 13:26:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:43.721 13:26:32 keyring_file -- keyring/file.sh@105 -- # jq length 00:32:43.721 13:26:32 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:32:43.721 13:26:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:43.979 13:26:32 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:32:43.979 13:26:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kRkMbr1xtg 00:32:43.980 13:26:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kRkMbr1xtg 00:32:44.238 13:26:32 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DIpSpoy7UB 00:32:44.238 13:26:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DIpSpoy7UB 00:32:44.495 13:26:32 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:44.495 13:26:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:44.753 nvme0n1 00:32:44.753 13:26:33 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:32:44.753 13:26:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
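Note what keyring_file_remove_key does above while nvme0n1 still holds the key: the entry is only flagged removed (.removed == true) with its refcnt still 1, and it disappears from keyring_get_keys only after the controller detaches. A sketch asserting that deferred-removal behavior with the same rpc.py calls as the trace (the output-parsing guard is an assumption — several of these methods print nothing on success):

    import json
    import subprocess

    def rpc(*args: str):
        out = subprocess.run(
            ["scripts/rpc.py", "-s", "/var/tmp/bperf.sock", *args],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out) if out.strip() else None

    rpc("keyring_file_remove_key", "key0")
    key0 = next(k for k in rpc("keyring_get_keys") if k["name"] == "key0")
    assert key0["removed"] and key0["refcnt"] == 1  # still pinned by nvme0
    rpc("bdev_nvme_detach_controller", "nvme0")
    assert len(rpc("keyring_get_keys")) == 0        # dropped once unreferenced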
00:32:45.011 13:26:33 keyring_file -- keyring/file.sh@113 -- # config='{ 00:32:45.011 "subsystems": [ 00:32:45.011 { 00:32:45.011 "subsystem": "keyring", 00:32:45.011 "config": [ 00:32:45.011 { 00:32:45.011 "method": "keyring_file_add_key", 00:32:45.011 "params": { 00:32:45.011 "name": "key0", 00:32:45.011 "path": "/tmp/tmp.kRkMbr1xtg" 00:32:45.011 } 00:32:45.011 }, 00:32:45.011 { 00:32:45.011 "method": "keyring_file_add_key", 00:32:45.011 "params": { 00:32:45.011 "name": "key1", 00:32:45.011 "path": "/tmp/tmp.DIpSpoy7UB" 00:32:45.011 } 00:32:45.011 } 00:32:45.011 ] 00:32:45.011 }, 00:32:45.011 { 00:32:45.011 "subsystem": "iobuf", 00:32:45.011 "config": [ 00:32:45.011 { 00:32:45.011 "method": "iobuf_set_options", 00:32:45.011 "params": { 00:32:45.011 "enable_numa": false, 00:32:45.011 "large_bufsize": 135168, 00:32:45.011 "large_pool_count": 1024, 00:32:45.011 "small_bufsize": 8192, 00:32:45.011 "small_pool_count": 8192 00:32:45.011 } 00:32:45.011 } 00:32:45.011 ] 00:32:45.011 }, 00:32:45.011 { 00:32:45.011 "subsystem": "sock", 00:32:45.011 "config": [ 00:32:45.011 { 00:32:45.011 "method": "sock_set_default_impl", 00:32:45.011 "params": { 00:32:45.011 "impl_name": "posix" 00:32:45.011 } 00:32:45.011 }, 00:32:45.011 { 00:32:45.011 "method": "sock_impl_set_options", 00:32:45.011 "params": { 00:32:45.011 "enable_ktls": false, 00:32:45.011 "enable_placement_id": 0, 00:32:45.011 "enable_quickack": false, 00:32:45.011 "enable_recv_pipe": true, 00:32:45.011 "enable_zerocopy_send_client": false, 00:32:45.011 "enable_zerocopy_send_server": true, 00:32:45.011 "impl_name": "ssl", 00:32:45.011 "recv_buf_size": 4096, 00:32:45.011 "send_buf_size": 4096, 00:32:45.011 "tls_version": 0, 00:32:45.011 "zerocopy_threshold": 0 00:32:45.011 } 00:32:45.011 }, 00:32:45.012 { 00:32:45.012 "method": "sock_impl_set_options", 00:32:45.012 "params": { 00:32:45.012 "enable_ktls": false, 00:32:45.012 "enable_placement_id": 0, 00:32:45.012 "enable_quickack": false, 00:32:45.012 "enable_recv_pipe": true, 00:32:45.012 "enable_zerocopy_send_client": false, 00:32:45.012 "enable_zerocopy_send_server": true, 00:32:45.012 "impl_name": "posix", 00:32:45.012 "recv_buf_size": 2097152, 00:32:45.012 "send_buf_size": 2097152, 00:32:45.012 "tls_version": 0, 00:32:45.012 "zerocopy_threshold": 0 00:32:45.012 } 00:32:45.012 } 00:32:45.012 ] 00:32:45.012 }, 00:32:45.012 { 00:32:45.012 "subsystem": "vmd", 00:32:45.012 "config": [] 00:32:45.012 }, 00:32:45.012 { 00:32:45.012 "subsystem": "accel", 00:32:45.012 "config": [ 00:32:45.012 { 00:32:45.012 "method": "accel_set_options", 00:32:45.012 "params": { 00:32:45.012 "buf_count": 2048, 00:32:45.012 "large_cache_size": 16, 00:32:45.012 "sequence_count": 2048, 00:32:45.012 "small_cache_size": 128, 00:32:45.012 "task_count": 2048 00:32:45.012 } 00:32:45.012 } 00:32:45.012 ] 00:32:45.012 }, 00:32:45.012 { 00:32:45.012 "subsystem": "bdev", 00:32:45.012 "config": [ 00:32:45.012 { 00:32:45.012 "method": "bdev_set_options", 00:32:45.012 "params": { 00:32:45.012 "bdev_auto_examine": true, 00:32:45.012 "bdev_io_cache_size": 256, 00:32:45.012 "bdev_io_pool_size": 65535, 00:32:45.012 "iobuf_large_cache_size": 16, 00:32:45.012 "iobuf_small_cache_size": 128 00:32:45.012 } 00:32:45.012 }, 00:32:45.012 { 00:32:45.012 "method": "bdev_raid_set_options", 00:32:45.012 "params": { 00:32:45.012 "process_max_bandwidth_mb_sec": 0, 00:32:45.012 "process_window_size_kb": 1024 00:32:45.012 } 00:32:45.012 }, 00:32:45.012 { 00:32:45.012 "method": "bdev_iscsi_set_options", 00:32:45.012 "params": { 00:32:45.012 
"timeout_sec": 30 00:32:45.012 } 00:32:45.012 }, 00:32:45.012 { 00:32:45.012 "method": "bdev_nvme_set_options", 00:32:45.012 "params": { 00:32:45.012 "action_on_timeout": "none", 00:32:45.012 "allow_accel_sequence": false, 00:32:45.012 "arbitration_burst": 0, 00:32:45.012 "bdev_retry_count": 3, 00:32:45.012 "ctrlr_loss_timeout_sec": 0, 00:32:45.012 "delay_cmd_submit": true, 00:32:45.012 "dhchap_dhgroups": [ 00:32:45.012 "null", 00:32:45.012 "ffdhe2048", 00:32:45.012 "ffdhe3072", 00:32:45.012 "ffdhe4096", 00:32:45.012 "ffdhe6144", 00:32:45.012 "ffdhe8192" 00:32:45.012 ], 00:32:45.012 "dhchap_digests": [ 00:32:45.012 "sha256", 00:32:45.012 "sha384", 00:32:45.012 "sha512" 00:32:45.012 ], 00:32:45.012 "disable_auto_failback": false, 00:32:45.012 "fast_io_fail_timeout_sec": 0, 00:32:45.012 "generate_uuids": false, 00:32:45.012 "high_priority_weight": 0, 00:32:45.012 "io_path_stat": false, 00:32:45.012 "io_queue_requests": 512, 00:32:45.012 "keep_alive_timeout_ms": 10000, 00:32:45.012 "low_priority_weight": 0, 00:32:45.012 "medium_priority_weight": 0, 00:32:45.012 "nvme_adminq_poll_period_us": 10000, 00:32:45.012 "nvme_error_stat": false, 00:32:45.012 "nvme_ioq_poll_period_us": 0, 00:32:45.012 "rdma_cm_event_timeout_ms": 0, 00:32:45.012 "rdma_max_cq_size": 0, 00:32:45.012 "rdma_srq_size": 0, 00:32:45.012 "reconnect_delay_sec": 0, 00:32:45.012 "timeout_admin_us": 0, 00:32:45.012 "timeout_us": 0, 00:32:45.012 "transport_ack_timeout": 0, 00:32:45.012 "transport_retry_count": 4, 00:32:45.012 "transport_tos": 0 00:32:45.012 } 00:32:45.012 }, 00:32:45.012 { 00:32:45.012 "method": "bdev_nvme_attach_controller", 00:32:45.012 "params": { 00:32:45.012 "adrfam": "IPv4", 00:32:45.012 "ctrlr_loss_timeout_sec": 0, 00:32:45.012 "ddgst": false, 00:32:45.012 "fast_io_fail_timeout_sec": 0, 00:32:45.012 "hdgst": false, 00:32:45.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:45.012 "multipath": "multipath", 00:32:45.012 "name": "nvme0", 00:32:45.012 "prchk_guard": false, 00:32:45.012 "prchk_reftag": false, 00:32:45.012 "psk": "key0", 00:32:45.012 "reconnect_delay_sec": 0, 00:32:45.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:45.012 "traddr": "127.0.0.1", 00:32:45.012 "trsvcid": "4420", 00:32:45.012 "trtype": "TCP" 00:32:45.012 } 00:32:45.012 }, 00:32:45.012 { 00:32:45.012 "method": "bdev_nvme_set_hotplug", 00:32:45.012 "params": { 00:32:45.012 "enable": false, 00:32:45.012 "period_us": 100000 00:32:45.012 } 00:32:45.012 }, 00:32:45.012 { 00:32:45.012 "method": "bdev_wait_for_examine" 00:32:45.012 } 00:32:45.012 ] 00:32:45.012 }, 00:32:45.012 { 00:32:45.012 "subsystem": "nbd", 00:32:45.012 "config": [] 00:32:45.012 } 00:32:45.012 ] 00:32:45.012 }' 00:32:45.012 13:26:33 keyring_file -- keyring/file.sh@115 -- # killprocess 111206 00:32:45.012 13:26:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111206 ']' 00:32:45.012 13:26:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111206 00:32:45.012 13:26:33 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:45.012 13:26:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:45.012 13:26:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111206 00:32:45.012 killing process with pid 111206 00:32:45.012 Received shutdown signal, test time was about 1.000000 seconds 00:32:45.012 00:32:45.012 Latency(us) 00:32:45.012 [2024-12-06T13:26:33.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.012 [2024-12-06T13:26:33.652Z] 
=================================================================================================================== 00:32:45.012 [2024-12-06T13:26:33.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:45.012 13:26:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:45.012 13:26:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:45.012 13:26:33 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111206' 00:32:45.012 13:26:33 keyring_file -- common/autotest_common.sh@973 -- # kill 111206 00:32:45.012 13:26:33 keyring_file -- common/autotest_common.sh@978 -- # wait 111206 00:32:45.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:45.272 13:26:33 keyring_file -- keyring/file.sh@118 -- # bperfpid=111677 00:32:45.272 13:26:33 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:45.272 13:26:33 keyring_file -- keyring/file.sh@120 -- # waitforlisten 111677 /var/tmp/bperf.sock 00:32:45.272 13:26:33 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111677 ']' 00:32:45.272 13:26:33 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:45.272 13:26:33 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.272 13:26:33 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:32:45.272 "subsystems": [ 00:32:45.272 { 00:32:45.272 "subsystem": "keyring", 00:32:45.272 "config": [ 00:32:45.272 { 00:32:45.272 "method": "keyring_file_add_key", 00:32:45.272 "params": { 00:32:45.272 "name": "key0", 00:32:45.272 "path": "/tmp/tmp.kRkMbr1xtg" 00:32:45.272 } 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "method": "keyring_file_add_key", 00:32:45.272 "params": { 00:32:45.272 "name": "key1", 00:32:45.272 "path": "/tmp/tmp.DIpSpoy7UB" 00:32:45.272 } 00:32:45.272 } 00:32:45.272 ] 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "subsystem": "iobuf", 00:32:45.272 "config": [ 00:32:45.272 { 00:32:45.272 "method": "iobuf_set_options", 00:32:45.272 "params": { 00:32:45.272 "enable_numa": false, 00:32:45.272 "large_bufsize": 135168, 00:32:45.272 "large_pool_count": 1024, 00:32:45.272 "small_bufsize": 8192, 00:32:45.272 "small_pool_count": 8192 00:32:45.272 } 00:32:45.272 } 00:32:45.272 ] 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "subsystem": "sock", 00:32:45.272 "config": [ 00:32:45.272 { 00:32:45.272 "method": "sock_set_default_impl", 00:32:45.272 "params": { 00:32:45.272 "impl_name": "posix" 00:32:45.272 } 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "method": "sock_impl_set_options", 00:32:45.272 "params": { 00:32:45.272 "enable_ktls": false, 00:32:45.272 "enable_placement_id": 0, 00:32:45.272 "enable_quickack": false, 00:32:45.272 "enable_recv_pipe": true, 00:32:45.272 "enable_zerocopy_send_client": false, 00:32:45.272 "enable_zerocopy_send_server": true, 00:32:45.272 "impl_name": "ssl", 00:32:45.272 "recv_buf_size": 4096, 00:32:45.272 "send_buf_size": 4096, 00:32:45.272 "tls_version": 0, 00:32:45.272 "zerocopy_threshold": 0 00:32:45.272 } 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "method": "sock_impl_set_options", 00:32:45.272 "params": { 00:32:45.272 "enable_ktls": false, 00:32:45.272 "enable_placement_id": 0, 00:32:45.272 "enable_quickack": false, 00:32:45.272 "enable_recv_pipe": true, 00:32:45.272 "enable_zerocopy_send_client": false, 00:32:45.272 "enable_zerocopy_send_server": true, 
00:32:45.272 "impl_name": "posix", 00:32:45.272 "recv_buf_size": 2097152, 00:32:45.272 "send_buf_size": 2097152, 00:32:45.272 "tls_version": 0, 00:32:45.272 "zerocopy_threshold": 0 00:32:45.272 } 00:32:45.272 } 00:32:45.272 ] 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "subsystem": "vmd", 00:32:45.272 "config": [] 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "subsystem": "accel", 00:32:45.272 "config": [ 00:32:45.272 { 00:32:45.272 "method": "accel_set_options", 00:32:45.272 "params": { 00:32:45.272 "buf_count": 2048, 00:32:45.272 "large_cache_size": 16, 00:32:45.272 "sequence_count": 2048, 00:32:45.272 "small_cache_size": 128, 00:32:45.272 "task_count": 2048 00:32:45.272 } 00:32:45.272 } 00:32:45.272 ] 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "subsystem": "bdev", 00:32:45.272 "config": [ 00:32:45.272 { 00:32:45.272 "method": "bdev_set_options", 00:32:45.272 "params": { 00:32:45.272 "bdev_auto_examine": true, 00:32:45.272 "bdev_io_cache_size": 256, 00:32:45.272 "bdev_io_pool_size": 65535, 00:32:45.272 "iobuf_large_cache_size": 16, 00:32:45.272 "iobuf_small_cache_size": 128 00:32:45.272 } 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "method": "bdev_raid_set_options", 00:32:45.272 "params": { 00:32:45.272 "process_max_bandwidth_mb_sec": 0, 00:32:45.272 "process_window_size_kb": 1024 00:32:45.272 } 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "method": "bdev_iscsi_set_options", 00:32:45.272 "params": { 00:32:45.272 "timeout_sec": 30 00:32:45.272 } 00:32:45.272 }, 00:32:45.272 { 00:32:45.272 "method": "bdev_nvme_set_options", 00:32:45.272 "params": { 00:32:45.272 "action_on_timeout": "none", 00:32:45.272 "allow_accel_sequence": false, 00:32:45.272 "arbitration_burst": 0, 00:32:45.272 "bdev_retry_count": 3, 00:32:45.272 "ctrlr_loss_timeout_sec": 0, 00:32:45.272 "delay_cmd_submit": true, 00:32:45.272 "dhchap_dhgroups": [ 00:32:45.272 "null", 00:32:45.272 "ffdhe2048", 00:32:45.272 "ffdhe3072", 00:32:45.272 "ffdhe4096", 00:32:45.272 "ffdhe6144", 00:32:45.272 "ffdhe8192" 00:32:45.272 ], 00:32:45.272 "dhchap_digests": [ 00:32:45.272 "sha256", 00:32:45.272 "sha384", 00:32:45.272 "sha512" 00:32:45.272 ], 00:32:45.272 "disable_auto_failback": false, 00:32:45.272 "fast_io_fail_timeout_sec": 0, 00:32:45.272 "generate_uuids": false, 00:32:45.272 "high_priority_weight": 0, 00:32:45.272 "io_path_stat": false, 00:32:45.272 "io_queue_requests": 512, 00:32:45.272 "keep_alive_timeout_ms": 10000, 00:32:45.272 "low_priority_weight": 0, 00:32:45.272 "medium_priority_weight": 0, 00:32:45.272 "nvme_adminq_poll_period_us": 10000, 00:32:45.272 "nvme_error_stat": false, 00:32:45.272 "nvme_ioq_poll_period_us": 0, 00:32:45.272 "rdma_cm_event_timeout_ms": 0, 00:32:45.272 "rdma_max_cq_size": 0, 00:32:45.273 "rdma_srq_size": 0, 00:32:45.273 "reconnect_delay_sec": 0, 00:32:45.273 "timeout_admin_us": 0, 00:32:45.273 "timeout_us": 0, 00:32:45.273 "transport_ack_timeout": 0, 00:32:45.273 "transport_retry_count": 4, 00:32:45.273 "transport_tos": 0 00:32:45.273 } 00:32:45.273 }, 00:32:45.273 { 00:32:45.273 "method": "bdev_nvme_attach_controller", 00:32:45.273 "params": { 00:32:45.273 "adrfam": "IPv4", 00:32:45.273 "ctrlr_loss_timeout_sec": 0, 00:32:45.273 "ddgst": false, 00:32:45.273 "fast_io_fail_timeout_sec": 0, 00:32:45.273 "hdgst": false, 00:32:45.273 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:45.273 "multipath": "multipath", 00:32:45.273 "name": "nvme0", 00:32:45.273 "prchk_guard": false, 00:32:45.273 "prchk_reftag": false, 00:32:45.273 "psk": "key0", 00:32:45.273 "reconnect_delay_sec": 0, 00:32:45.273 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:32:45.273 "traddr": "127.0.0.1", 00:32:45.273 "trsvcid": "4420", 00:32:45.273 "trtype": "TCP" 00:32:45.273 } 00:32:45.273 }, 00:32:45.273 { 00:32:45.273 "method": "bdev_nvme_set_hotplug", 00:32:45.273 "params": { 00:32:45.273 "enable": false, 00:32:45.273 "period_us": 100000 00:32:45.273 } 00:32:45.273 }, 00:32:45.273 { 00:32:45.273 "method": "bdev_wait_for_examine" 00:32:45.273 } 00:32:45.273 ] 00:32:45.273 }, 00:32:45.273 { 00:32:45.273 "subsystem": "nbd", 00:32:45.273 "config": [] 00:32:45.273 } 00:32:45.273 ] 00:32:45.273 }' 00:32:45.273 13:26:33 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:45.273 13:26:33 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.273 13:26:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:45.273 [2024-12-06 13:26:33.871887] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:32:45.273 [2024-12-06 13:26:33.872171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111677 ] 00:32:45.531 [2024-12-06 13:26:34.014062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.531 [2024-12-06 13:26:34.076564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.789 [2024-12-06 13:26:34.269123] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:46.355 13:26:34 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.355 13:26:34 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:46.355 13:26:34 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:32:46.355 13:26:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:46.355 13:26:34 keyring_file -- keyring/file.sh@121 -- # jq length 00:32:46.622 13:26:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:46.622 13:26:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:32:46.622 13:26:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:46.622 13:26:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:46.622 13:26:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:46.622 13:26:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:46.622 13:26:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:47.189 13:26:35 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:32:47.189 13:26:35 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:32:47.189 13:26:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:47.189 13:26:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:47.189 13:26:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.189 13:26:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.189 13:26:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:47.189 13:26:35 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:32:47.189 
13:26:35 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:32:47.189 13:26:35 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:32:47.189 13:26:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:47.755 13:26:36 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:32:47.755 13:26:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:47.755 13:26:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.kRkMbr1xtg /tmp/tmp.DIpSpoy7UB 00:32:47.755 13:26:36 keyring_file -- keyring/file.sh@20 -- # killprocess 111677 00:32:47.755 13:26:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111677 ']' 00:32:47.755 13:26:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111677 00:32:47.755 13:26:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:47.755 13:26:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.755 13:26:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111677 00:32:47.755 killing process with pid 111677 00:32:47.755 Received shutdown signal, test time was about 1.000000 seconds 00:32:47.755 00:32:47.755 Latency(us) 00:32:47.755 [2024-12-06T13:26:36.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.755 [2024-12-06T13:26:36.395Z] =================================================================================================================== 00:32:47.755 [2024-12-06T13:26:36.395Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:47.755 13:26:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:47.755 13:26:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:47.755 13:26:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111677' 00:32:47.755 13:26:36 keyring_file -- common/autotest_common.sh@973 -- # kill 111677 00:32:47.755 13:26:36 keyring_file -- common/autotest_common.sh@978 -- # wait 111677 00:32:48.014 13:26:36 keyring_file -- keyring/file.sh@21 -- # killprocess 111188 00:32:48.014 13:26:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111188 ']' 00:32:48.014 13:26:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111188 00:32:48.014 13:26:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:48.014 13:26:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:48.014 13:26:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111188 00:32:48.014 killing process with pid 111188 00:32:48.014 13:26:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:48.014 13:26:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:48.014 13:26:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111188' 00:32:48.014 13:26:36 keyring_file -- common/autotest_common.sh@973 -- # kill 111188 00:32:48.014 13:26:36 keyring_file -- common/autotest_common.sh@978 -- # wait 111188 00:32:48.588 00:32:48.588 real 0m16.533s 00:32:48.588 user 0m41.857s 00:32:48.588 sys 0m3.341s 00:32:48.588 13:26:37 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.588 13:26:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:48.588 ************************************ 00:32:48.588 END TEST keyring_file 00:32:48.588 
************************************ 00:32:48.588 13:26:37 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:32:48.588 13:26:37 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:32:48.588 13:26:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:48.588 13:26:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:48.588 13:26:37 -- common/autotest_common.sh@10 -- # set +x 00:32:48.588 ************************************ 00:32:48.588 START TEST keyring_linux 00:32:48.588 ************************************ 00:32:48.588 13:26:37 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:32:48.588 Joined session keyring: 690718608 00:32:48.588 * Looking for test storage... 00:32:48.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:32:48.588 13:26:37 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:48.588 13:26:37 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:32:48.588 13:26:37 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:48.862 13:26:37 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@345 -- # : 1 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:48.862 13:26:37 keyring_linux -- scripts/common.sh@368 -- # return 0 00:32:48.862 13:26:37 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:48.862 13:26:37 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:48.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.862 --rc genhtml_branch_coverage=1 00:32:48.862 --rc genhtml_function_coverage=1 00:32:48.862 --rc genhtml_legend=1 00:32:48.862 --rc geninfo_all_blocks=1 00:32:48.862 --rc geninfo_unexecuted_blocks=1 00:32:48.862 00:32:48.862 ' 00:32:48.862 13:26:37 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:48.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.863 --rc genhtml_branch_coverage=1 00:32:48.863 --rc genhtml_function_coverage=1 00:32:48.863 --rc genhtml_legend=1 00:32:48.863 --rc geninfo_all_blocks=1 00:32:48.863 --rc geninfo_unexecuted_blocks=1 00:32:48.863 00:32:48.863 ' 00:32:48.863 13:26:37 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:48.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.863 --rc genhtml_branch_coverage=1 00:32:48.863 --rc genhtml_function_coverage=1 00:32:48.863 --rc genhtml_legend=1 00:32:48.863 --rc geninfo_all_blocks=1 00:32:48.863 --rc geninfo_unexecuted_blocks=1 00:32:48.863 00:32:48.863 ' 00:32:48.863 13:26:37 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:48.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.863 --rc genhtml_branch_coverage=1 00:32:48.863 --rc genhtml_function_coverage=1 00:32:48.863 --rc genhtml_legend=1 00:32:48.863 --rc geninfo_all_blocks=1 00:32:48.863 --rc geninfo_unexecuted_blocks=1 00:32:48.863 00:32:48.863 ' 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.863 13:26:37 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02865f8e-a4d1-44f6-b22e-478c939aabc8 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=02865f8e-a4d1-44f6-b22e-478c939aabc8 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:48.863 13:26:37 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:32:48.863 13:26:37 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.863 13:26:37 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.863 13:26:37 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.863 13:26:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.863 13:26:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.863 13:26:37 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.863 13:26:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:48.863 13:26:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:48.863 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:48.863 /tmp/:spdk-test:key0 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:48.863 13:26:37 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:48.863 /tmp/:spdk-test:key1 00:32:48.863 13:26:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=111840 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:48.863 13:26:37 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 111840 00:32:48.863 13:26:37 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 111840 ']' 00:32:48.863 13:26:37 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.863 13:26:37 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:48.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.863 13:26:37 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.863 13:26:37 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:48.863 13:26:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.122 [2024-12-06 13:26:37.508104] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:32:49.122 [2024-12-06 13:26:37.508238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111840 ] 00:32:49.122 [2024-12-06 13:26:37.659284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.122 [2024-12-06 13:26:37.741376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:49.690 13:26:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.690 [2024-12-06 13:26:38.122951] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.690 null0 00:32:49.690 [2024-12-06 13:26:38.154945] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:49.690 [2024-12-06 13:26:38.155241] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.690 13:26:38 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:49.690 824644929 00:32:49.690 13:26:38 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:49.690 937824342 00:32:49.690 13:26:38 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=111864 00:32:49.690 13:26:38 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:49.690 13:26:38 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 111864 /var/tmp/bperf.sock 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 111864 ']' 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:49.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:49.690 13:26:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.690 [2024-12-06 13:26:38.235490] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:32:49.690 [2024-12-06 13:26:38.235593] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111864 ] 00:32:49.948 [2024-12-06 13:26:38.383383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.948 [2024-12-06 13:26:38.456679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.948 13:26:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.948 13:26:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:49.948 13:26:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:49.949 13:26:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:50.207 13:26:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:50.207 13:26:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:50.775 13:26:39 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:50.775 13:26:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:51.034 [2024-12-06 13:26:39.451010] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:51.034 nvme0n1 00:32:51.034 13:26:39 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:51.034 13:26:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:51.034 13:26:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:51.034 13:26:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:51.034 13:26:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:51.034 13:26:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.293 13:26:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:51.293 13:26:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:51.293 13:26:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:51.293 13:26:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:51.293 13:26:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:51.293 13:26:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:51.293 13:26:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.859 13:26:40 keyring_linux -- keyring/linux.sh@25 -- # sn=824644929 00:32:51.859 13:26:40 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:51.859 13:26:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:51.859 13:26:40 keyring_linux -- keyring/linux.sh@26 -- # [[ 824644929 == \8\2\4\6\4\4\9\2\9 ]] 00:32:51.859 13:26:40 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 824644929 00:32:51.859 13:26:40 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:51.859 13:26:40 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.859 Running I/O for 1 seconds... 00:32:52.796 8442.00 IOPS, 32.98 MiB/s 00:32:52.796 Latency(us) 00:32:52.796 [2024-12-06T13:26:41.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.796 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:52.796 nvme0n1 : 1.01 8443.62 32.98 0.00 0.00 15042.36 3544.90 17039.36 00:32:52.796 [2024-12-06T13:26:41.436Z] =================================================================================================================== 00:32:52.796 [2024-12-06T13:26:41.436Z] Total : 8443.62 32.98 0.00 0.00 15042.36 3544.90 17039.36 00:32:52.796 { 00:32:52.796 "results": [ 00:32:52.796 { 00:32:52.796 "job": "nvme0n1", 00:32:52.796 "core_mask": "0x2", 00:32:52.796 "workload": "randread", 00:32:52.796 "status": "finished", 00:32:52.796 "queue_depth": 128, 00:32:52.796 "io_size": 4096, 00:32:52.796 "runtime": 1.014967, 00:32:52.796 "iops": 8443.624275469054, 00:32:52.796 "mibps": 32.98290732605099, 00:32:52.796 "io_failed": 0, 00:32:52.796 "io_timeout": 0, 00:32:52.796 "avg_latency_us": 15042.356872812135, 00:32:52.796 "min_latency_us": 3544.9018181818183, 00:32:52.796 "max_latency_us": 17039.36 00:32:52.796 } 00:32:52.796 ], 00:32:52.796 "core_count": 1 00:32:52.796 } 00:32:52.796 13:26:41 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:52.796 13:26:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:53.055 13:26:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:53.055 13:26:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:53.055 13:26:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:53.055 13:26:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:53.055 13:26:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:53.055 13:26:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.621 13:26:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:53.621 13:26:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:53.621 13:26:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:53.621 13:26:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.621 13:26:42 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:32:53.621 13:26:42 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.621 13:26:42 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:53.621 13:26:42 keyring_linux -- common/autotest_common.sh@644 
-- # case "$(type -t "$arg")" in 00:32:53.621 13:26:42 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:53.621 13:26:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:53.621 13:26:42 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.621 13:26:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.882 [2024-12-06 13:26:42.351345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa230 (107): Transport endpoint is not connected 00:32:53.882 [2024-12-06 13:26:42.351347] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:53.882 [2024-12-06 13:26:42.352329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa230 (9): Bad file descriptor 00:32:53.882 [2024-12-06 13:26:42.353325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:53.882 [2024-12-06 13:26:42.353348] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:53.882 [2024-12-06 13:26:42.353359] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:53.882 [2024-12-06 13:26:42.353370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
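Note: the errors above are the expected outcome, not a regression — linux.sh@84 wraps this attach in NOT because the target exports cnode0 with :spdk-test:key0, so a connection presenting :spdk-test:key1 as its PSK must be rejected, which is what the JSON-RPC error response below records. A hedged sketch of the same expected-failure pattern, reusing the exact attach parameters from this run:

  #!/usr/bin/env bash
  # Sketch (not part of the test suite): the attach must FAIL for the test to pass.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  if "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp \
       -a 127.0.0.1 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
       --psk :spdk-test:key1; then
    echo "attach with the wrong PSK unexpectedly succeeded" >&2
    exit 1
  fi
  echo "attach rejected as expected"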
00:32:53.882 2024/12/06 13:26:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:32:53.882 request: 00:32:53.882 { 00:32:53.882 "method": "bdev_nvme_attach_controller", 00:32:53.882 "params": { 00:32:53.882 "name": "nvme0", 00:32:53.882 "trtype": "tcp", 00:32:53.882 "traddr": "127.0.0.1", 00:32:53.882 "adrfam": "ipv4", 00:32:53.882 "trsvcid": "4420", 00:32:53.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:53.882 "prchk_reftag": false, 00:32:53.882 "prchk_guard": false, 00:32:53.882 "hdgst": false, 00:32:53.882 "ddgst": false, 00:32:53.882 "psk": ":spdk-test:key1", 00:32:53.882 "allow_unrecognized_csi": false 00:32:53.882 } 00:32:53.882 } 00:32:53.882 Got JSON-RPC error response 00:32:53.882 GoRPCClient: error on JSON-RPC call 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@33 -- # sn=824644929 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 824644929 00:32:53.882 1 links removed 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@33 -- # sn=937824342 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 937824342 00:32:53.882 1 links removed 00:32:53.882 13:26:42 keyring_linux -- keyring/linux.sh@41 -- # killprocess 111864 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 111864 ']' 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 111864 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111864 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:53.882 
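Note: cleanup above first resolves each key name to its kernel serial number (keyctl search @s user <name>), because keyctl unlink takes a serial rather than a name; the serials 824644929 and 937824342 unlinked here match the ones printed when the keys were added earlier in the test. A minimal sketch of that lookup-then-unlink sequence, assuming the same session keyring:

  #!/usr/bin/env bash
  # Sketch (not part of the test suite): drop the test keys from the session keyring.
  for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name") || continue  # name -> serial number
    keyctl unlink "$sn"                              # prints "1 links removed" on success
  done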
13:26:42 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:53.882 killing process with pid 111864 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111864' 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@973 -- # kill 111864 00:32:53.882 Received shutdown signal, test time was about 1.000000 seconds 00:32:53.882 00:32:53.882 Latency(us) 00:32:53.882 [2024-12-06T13:26:42.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.882 [2024-12-06T13:26:42.522Z] =================================================================================================================== 00:32:53.882 [2024-12-06T13:26:42.522Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:53.882 13:26:42 keyring_linux -- common/autotest_common.sh@978 -- # wait 111864 00:32:54.140 13:26:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 111840 00:32:54.140 13:26:42 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 111840 ']' 00:32:54.140 13:26:42 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 111840 00:32:54.140 13:26:42 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:32:54.140 13:26:42 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.140 13:26:42 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111840 00:32:54.140 13:26:42 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:54.140 13:26:42 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:54.140 killing process with pid 111840 00:32:54.140 13:26:42 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111840' 00:32:54.140 13:26:42 keyring_linux -- common/autotest_common.sh@973 -- # kill 111840 00:32:54.140 13:26:42 keyring_linux -- common/autotest_common.sh@978 -- # wait 111840 00:32:54.707 00:32:54.707 real 0m6.161s 00:32:54.707 user 0m11.791s 00:32:54.707 sys 0m1.872s 00:32:54.707 13:26:43 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.707 13:26:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:54.707 ************************************ 00:32:54.707 END TEST keyring_linux 00:32:54.707 ************************************ 00:32:54.707 13:26:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:54.707 13:26:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:54.707 13:26:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:54.707 13:26:43 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:54.707 13:26:43 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:32:54.707 13:26:43 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:32:54.707 13:26:43 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:32:54.707 13:26:43 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:32:54.707 13:26:43 -- common/autotest_common.sh@10 -- # set +x 00:32:54.707 13:26:43 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:32:54.707 13:26:43 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:32:54.707 13:26:43 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:32:54.707 13:26:43 -- common/autotest_common.sh@10 -- # set +x 00:32:56.701 INFO: APP EXITING 00:32:56.701 INFO: killing all VMs 00:32:56.701 INFO: killing vhost app 00:32:56.701 INFO: EXIT DONE 00:32:57.633 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:57.633 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:57.633 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:58.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:58.199 Cleaning 00:32:58.199 Removing: /var/run/dpdk/spdk0/config 00:32:58.199 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:58.199 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:58.199 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:58.199 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:58.199 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:58.199 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:58.199 Removing: /var/run/dpdk/spdk1/config 00:32:58.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:58.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:58.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:58.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:58.199 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:58.199 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:58.456 Removing: /var/run/dpdk/spdk2/config 00:32:58.456 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:58.456 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:58.456 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:58.456 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:58.456 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:58.456 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:58.456 Removing: /var/run/dpdk/spdk3/config 00:32:58.456 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:58.456 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:58.456 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:58.456 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:58.456 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:58.456 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:58.456 Removing: /var/run/dpdk/spdk4/config 00:32:58.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:58.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:58.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:58.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:58.456 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:58.456 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:58.456 Removing: /dev/shm/nvmf_trace.0 00:32:58.456 Removing: /dev/shm/spdk_tgt_trace.pid58730 00:32:58.456 Removing: /var/run/dpdk/spdk0 00:32:58.456 Removing: /var/run/dpdk/spdk1 00:32:58.456 Removing: /var/run/dpdk/spdk2 00:32:58.456 Removing: /var/run/dpdk/spdk3 00:32:58.456 Removing: /var/run/dpdk/spdk4 00:32:58.456 Removing: /var/run/dpdk/spdk_pid101630 00:32:58.456 Removing: 
/var/run/dpdk/spdk_pid101672 00:32:58.456 Removing: /var/run/dpdk/spdk_pid102029 00:32:58.456 Removing: /var/run/dpdk/spdk_pid102075 00:32:58.456 Removing: /var/run/dpdk/spdk_pid102493 00:32:58.456 Removing: /var/run/dpdk/spdk_pid103073 00:32:58.457 Removing: /var/run/dpdk/spdk_pid103497 00:32:58.457 Removing: /var/run/dpdk/spdk_pid104525 00:32:58.457 Removing: /var/run/dpdk/spdk_pid105573 00:32:58.457 Removing: /var/run/dpdk/spdk_pid105681 00:32:58.457 Removing: /var/run/dpdk/spdk_pid105749 00:32:58.457 Removing: /var/run/dpdk/spdk_pid107342 00:32:58.457 Removing: /var/run/dpdk/spdk_pid107664 00:32:58.457 Removing: /var/run/dpdk/spdk_pid108002 00:32:58.457 Removing: /var/run/dpdk/spdk_pid108554 00:32:58.457 Removing: /var/run/dpdk/spdk_pid108560 00:32:58.457 Removing: /var/run/dpdk/spdk_pid108955 00:32:58.457 Removing: /var/run/dpdk/spdk_pid109111 00:32:58.457 Removing: /var/run/dpdk/spdk_pid109267 00:32:58.457 Removing: /var/run/dpdk/spdk_pid109360 00:32:58.457 Removing: /var/run/dpdk/spdk_pid109515 00:32:58.457 Removing: /var/run/dpdk/spdk_pid109630 00:32:58.457 Removing: /var/run/dpdk/spdk_pid110339 00:32:58.457 Removing: /var/run/dpdk/spdk_pid110371 00:32:58.457 Removing: /var/run/dpdk/spdk_pid110405 00:32:58.457 Removing: /var/run/dpdk/spdk_pid110656 00:32:58.457 Removing: /var/run/dpdk/spdk_pid110691 00:32:58.457 Removing: /var/run/dpdk/spdk_pid110721 00:32:58.457 Removing: /var/run/dpdk/spdk_pid111188 00:32:58.457 Removing: /var/run/dpdk/spdk_pid111206 00:32:58.457 Removing: /var/run/dpdk/spdk_pid111677 00:32:58.457 Removing: /var/run/dpdk/spdk_pid111840 00:32:58.457 Removing: /var/run/dpdk/spdk_pid111864 00:32:58.457 Removing: /var/run/dpdk/spdk_pid58572 00:32:58.457 Removing: /var/run/dpdk/spdk_pid58730 00:32:58.457 Removing: /var/run/dpdk/spdk_pid59004 00:32:58.457 Removing: /var/run/dpdk/spdk_pid59092 00:32:58.457 Removing: /var/run/dpdk/spdk_pid59137 00:32:58.457 Removing: /var/run/dpdk/spdk_pid59241 00:32:58.457 Removing: /var/run/dpdk/spdk_pid59263 00:32:58.457 Removing: /var/run/dpdk/spdk_pid59397 00:32:58.457 Removing: /var/run/dpdk/spdk_pid59687 00:32:58.457 Removing: /var/run/dpdk/spdk_pid59871 00:32:58.457 Removing: /var/run/dpdk/spdk_pid59956 00:32:58.457 Removing: /var/run/dpdk/spdk_pid60056 00:32:58.457 Removing: /var/run/dpdk/spdk_pid60151 00:32:58.457 Removing: /var/run/dpdk/spdk_pid60184 00:32:58.457 Removing: /var/run/dpdk/spdk_pid60214 00:32:58.457 Removing: /var/run/dpdk/spdk_pid60289 00:32:58.457 Removing: /var/run/dpdk/spdk_pid60371 00:32:58.457 Removing: /var/run/dpdk/spdk_pid61007 00:32:58.457 Removing: /var/run/dpdk/spdk_pid61071 00:32:58.457 Removing: /var/run/dpdk/spdk_pid61140 00:32:58.457 Removing: /var/run/dpdk/spdk_pid61154 00:32:58.457 Removing: /var/run/dpdk/spdk_pid61239 00:32:58.457 Removing: /var/run/dpdk/spdk_pid61267 00:32:58.457 Removing: /var/run/dpdk/spdk_pid61346 00:32:58.457 Removing: /var/run/dpdk/spdk_pid61360 00:32:58.457 Removing: /var/run/dpdk/spdk_pid61412 00:32:58.714 Removing: /var/run/dpdk/spdk_pid61434 00:32:58.714 Removing: /var/run/dpdk/spdk_pid61480 00:32:58.714 Removing: /var/run/dpdk/spdk_pid61496 00:32:58.714 Removing: /var/run/dpdk/spdk_pid61658 00:32:58.714 Removing: /var/run/dpdk/spdk_pid61694 00:32:58.714 Removing: /var/run/dpdk/spdk_pid61771 00:32:58.714 Removing: /var/run/dpdk/spdk_pid62236 00:32:58.714 Removing: /var/run/dpdk/spdk_pid62598 00:32:58.714 Removing: /var/run/dpdk/spdk_pid65108 00:32:58.714 Removing: /var/run/dpdk/spdk_pid65154 00:32:58.714 Removing: /var/run/dpdk/spdk_pid65499 00:32:58.714 Removing: 
/var/run/dpdk/spdk_pid65545 00:32:58.714 Removing: /var/run/dpdk/spdk_pid65947 00:32:58.714 Removing: /var/run/dpdk/spdk_pid66526 00:32:58.714 Removing: /var/run/dpdk/spdk_pid66982 00:32:58.714 Removing: /var/run/dpdk/spdk_pid68064 00:32:58.714 Removing: /var/run/dpdk/spdk_pid69153 00:32:58.714 Removing: /var/run/dpdk/spdk_pid69270 00:32:58.714 Removing: /var/run/dpdk/spdk_pid69343 00:32:58.714 Removing: /var/run/dpdk/spdk_pid70953 00:32:58.714 Removing: /var/run/dpdk/spdk_pid71302 00:32:58.714 Removing: /var/run/dpdk/spdk_pid75191 00:32:58.714 Removing: /var/run/dpdk/spdk_pid75623 00:32:58.714 Removing: /var/run/dpdk/spdk_pid76285 00:32:58.714 Removing: /var/run/dpdk/spdk_pid76830 00:32:58.714 Removing: /var/run/dpdk/spdk_pid82747 00:32:58.714 Removing: /var/run/dpdk/spdk_pid83259 00:32:58.714 Removing: /var/run/dpdk/spdk_pid83363 00:32:58.714 Removing: /var/run/dpdk/spdk_pid83508 00:32:58.714 Removing: /var/run/dpdk/spdk_pid83566 00:32:58.714 Removing: /var/run/dpdk/spdk_pid83619 00:32:58.714 Removing: /var/run/dpdk/spdk_pid83658 00:32:58.714 Removing: /var/run/dpdk/spdk_pid83814 00:32:58.714 Removing: /var/run/dpdk/spdk_pid83977 00:32:58.714 Removing: /var/run/dpdk/spdk_pid84265 00:32:58.714 Removing: /var/run/dpdk/spdk_pid84381 00:32:58.714 Removing: /var/run/dpdk/spdk_pid84632 00:32:58.714 Removing: /var/run/dpdk/spdk_pid84744 00:32:58.714 Removing: /var/run/dpdk/spdk_pid84881 00:32:58.714 Removing: /var/run/dpdk/spdk_pid85282 00:32:58.714 Removing: /var/run/dpdk/spdk_pid85744 00:32:58.714 Removing: /var/run/dpdk/spdk_pid85745 00:32:58.714 Removing: /var/run/dpdk/spdk_pid85746 00:32:58.714 Removing: /var/run/dpdk/spdk_pid86019 00:32:58.714 Removing: /var/run/dpdk/spdk_pid86284 00:32:58.714 Removing: /var/run/dpdk/spdk_pid86711 00:32:58.714 Removing: /var/run/dpdk/spdk_pid87071 00:32:58.715 Removing: /var/run/dpdk/spdk_pid87665 00:32:58.715 Removing: /var/run/dpdk/spdk_pid87673 00:32:58.715 Removing: /var/run/dpdk/spdk_pid88068 00:32:58.715 Removing: /var/run/dpdk/spdk_pid88088 00:32:58.715 Removing: /var/run/dpdk/spdk_pid88102 00:32:58.715 Removing: /var/run/dpdk/spdk_pid88135 00:32:58.715 Removing: /var/run/dpdk/spdk_pid88144 00:32:58.715 Removing: /var/run/dpdk/spdk_pid88538 00:32:58.715 Removing: /var/run/dpdk/spdk_pid88585 00:32:58.715 Removing: /var/run/dpdk/spdk_pid88974 00:32:58.715 Removing: /var/run/dpdk/spdk_pid89227 00:32:58.715 Removing: /var/run/dpdk/spdk_pid89762 00:32:58.715 Removing: /var/run/dpdk/spdk_pid90385 00:32:58.715 Removing: /var/run/dpdk/spdk_pid91820 00:32:58.715 Removing: /var/run/dpdk/spdk_pid92474 00:32:58.715 Removing: /var/run/dpdk/spdk_pid92480 00:32:58.715 Removing: /var/run/dpdk/spdk_pid94547 00:32:58.715 Removing: /var/run/dpdk/spdk_pid94624 00:32:58.715 Removing: /var/run/dpdk/spdk_pid94714 00:32:58.715 Removing: /var/run/dpdk/spdk_pid94811 00:32:58.715 Removing: /var/run/dpdk/spdk_pid94975 00:32:58.715 Removing: /var/run/dpdk/spdk_pid95046 00:32:58.715 Removing: /var/run/dpdk/spdk_pid95141 00:32:58.715 Removing: /var/run/dpdk/spdk_pid95218 00:32:58.715 Removing: /var/run/dpdk/spdk_pid95598 00:32:58.715 Removing: /var/run/dpdk/spdk_pid96350 00:32:58.715 Removing: /var/run/dpdk/spdk_pid97776 00:32:58.715 Removing: /var/run/dpdk/spdk_pid97977 00:32:58.715 Removing: /var/run/dpdk/spdk_pid98249 00:32:58.715 Removing: /var/run/dpdk/spdk_pid98783 00:32:58.715 Removing: /var/run/dpdk/spdk_pid99184 00:32:58.715 Clean 00:32:58.971 13:26:47 -- common/autotest_common.sh@1453 -- # return 0 00:32:58.971 13:26:47 -- spdk/autotest.sh@389 -- # timing_exit 
00:32:58.971 13:26:47 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:58.971 13:26:47 -- common/autotest_common.sh@10 -- # set +x
00:32:58.971 13:26:47 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:32:58.971 13:26:47 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:58.971 13:26:47 -- common/autotest_common.sh@10 -- # set +x
00:32:58.971 13:26:47 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:58.971 13:26:47 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:58.971 13:26:47 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:58.971 13:26:47 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:32:58.971 13:26:47 -- spdk/autotest.sh@398 -- # hostname
00:32:58.971 13:26:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:59.228 geninfo: WARNING: invalid characters removed from testname!
00:33:31.303 13:27:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:31.303 13:27:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:33.203 13:27:21 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:36.490 13:27:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:39.020 13:27:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:42.380 13:27:30 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
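Note: the lcov invocations above follow a capture, merge, filter pattern: capture the coverage recorded during the test run, add it to the pre-test baseline, then repeatedly strip paths that are not SPDK sources from the combined tracefile. A condensed sketch, assuming the resolved output directory and reusing only lcov options that appear in the log (the pattern loop and the omission of the --rc and --ignore-errors settings are simplifications of autotest.sh, not its exact code):

    #!/usr/bin/env bash
    # Coverage post-processing in condensed form.
    out=/home/vagrant/spdk_repo/output      # resolved form of spdk/../output
    # 1. Capture coverage recorded during the test run.
    lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -o "${out}/cov_test.info"
    # 2. Merge the test capture into the pre-test baseline.
    lcov -q -a "${out}/cov_base.info" -a "${out}/cov_test.info" -o "${out}/cov_total.info"
    # 3. Drop everything that is not SPDK code, one filter pattern at a time.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "${out}/cov_total.info" "${pattern}" -o "${out}/cov_total.info"
    done

Writing each filter result back to cov_total.info mirrors the log, where every lcov -r step reads and rewrites the same combined tracefile.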
00:33:44.912 13:27:33 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:44.912 13:27:33 -- spdk/autorun.sh@1 -- $ timing_finish
00:33:44.912 13:27:33 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:33:44.912 13:27:33 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:44.912 13:27:33 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:44.912 13:27:33 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:44.912 + [[ -n 5255 ]]
00:33:44.921 + sudo kill 5255
00:33:44.921 [Pipeline] }
00:33:44.938 [Pipeline] // timeout
00:33:44.944 [Pipeline] }
00:33:44.959 [Pipeline] // stage
00:33:44.964 [Pipeline] }
00:33:44.980 [Pipeline] // catchError
00:33:44.990 [Pipeline] stage
00:33:44.992 [Pipeline] { (Stop VM)
00:33:45.005 [Pipeline] sh
00:33:45.286 + vagrant halt
00:33:48.581 ==> default: Halting domain...
00:33:55.222 [Pipeline] sh
00:33:55.501 + vagrant destroy -f
00:33:59.686 ==> default: Removing domain...
00:33:59.697 [Pipeline] sh
00:33:59.975 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output
00:33:59.983 [Pipeline] }
00:33:59.997 [Pipeline] // stage
00:34:00.002 [Pipeline] }
00:34:00.013 [Pipeline] // dir
00:34:00.018 [Pipeline] }
00:34:00.031 [Pipeline] // wrap
00:34:00.035 [Pipeline] }
00:34:00.046 [Pipeline] // catchError
00:34:00.054 [Pipeline] stage
00:34:00.056 [Pipeline] { (Epilogue)
00:34:00.067 [Pipeline] sh
00:34:00.341 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:08.464 [Pipeline] catchError
00:34:08.466 [Pipeline] {
00:34:08.478 [Pipeline] sh
00:34:08.758 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:09.018 Artifacts sizes are good
00:34:09.027 [Pipeline] }
00:34:09.044 [Pipeline] // catchError
00:34:09.061 [Pipeline] archiveArtifacts
00:34:09.070 Archiving artifacts
00:34:09.192 [Pipeline] cleanWs
00:34:09.204 [WS-CLEANUP] Deleting project workspace...
00:34:09.204 [WS-CLEANUP] Deferred wipeout is used...
00:34:09.211 [WS-CLEANUP] done
00:34:09.213 [Pipeline] }
00:34:09.229 [Pipeline] // stage
00:34:09.234 [Pipeline] }
00:34:09.248 [Pipeline] // node
00:34:09.254 [Pipeline] End of Pipeline
00:34:09.280 Finished: SUCCESS